Why Most Dashboards Fail
Stephen Few, Perceptual Edge
Author of Information Dashboard Design: The Effective Visual Communication of Data
Rise of the Dashboard
Dashboards can provide a powerful solution to information overload, but only when they are properly designed. Most dashboards that are used in businesses today fail. At best they deliver only a fraction of the insight that is needed to monitor the business. This is a travesty, because effective dashboard design can be achieved by following a small set of visual design principles that can be easily learned.

Let me back up a little and put this in context. Few phenomena characterize our time more uniquely and pervasively than the rapid rise and influence of information technologies. These technologies have unleashed a tsunami of data that rolls over and flattens us in its wake. Taming this beast has become a primary goal of the information industry. One tool that has emerged from this effort in recent years is the dashboard. This single-screen display of the most important information needed to do a job, designed for rapid monitoring, is a powerful new medium of data presentation. At least it can be, but only when properly designed. Most dashboards that are used in business today, however, fall far short of their potential.

Several circumstances have recently merged to allow dashboards to bring real value to the workplace. These circumstances include technologies such as high-resolution graphics, emphasis since the 1990s on performance management and metrics, and a growing recognition of visual perception as a powerful channel for information acquisition and comprehension. Dashboards offer a unique solution to the problem of information overload, not a complete solution by any means, but one that can help a lot.

Dashboards are unique in several exciting and useful ways, but despite the hype surrounding them, surprisingly few present information effectively. People believe that dashboards must look flashy, filled with eye-catching gauges and charts, sizzling with graphical luster, despite the fact that displays of this type usually say little, and what they manage to say, they say poorly. Only those who cut through the hype and learn practical dashboard design skills will produce dashboards that actually work.

Much of the problem can be traced back to the vendors that develop and sell dashboard products. They work hard to make their dashboards shimmy with sex appeal. They taunt, “You don’t want to be the only company in your neighborhood without one, do you?” They whisper sweetly, “Still haven’t achieved the expected return on investment (ROI) from your expensive data warehouse? Just stick a dashboard in front of it and watch the money pour in.” Those gauges, meters, and traffic lights are so damn cute, but their appeal is only skin deep. Rather than creating a demand for superficial flash, vendors ought to be learning from the vast body of information visualization research that already exists, and then developing and selling tools that actually work. Rest assured that beyond the hype and sizzle lives a unique and effective solution to a very real need for information. This is the dashboard that deserves to live on your screen.

The root of the problem is not technology—at least not primarily—but poor data presentation. To serve their purpose and fulfill their potential, dashboards must display a dense array of information in a small amount of space in a manner that communicates clearly and immediately. This requires design that taps into and leverages the power of visual perception and the human brain to sense and process several chunks of information rapidly. This can only be achieved when the visual design of dashboards is central to the development process and is informed by a solid understanding of visual perception and human cognition—what works, what doesn’t, and why. No technology can do this for you. Someone must bring design expertise to the process.
The Dashboard Design Challenge
The fundamental challenge of dashboard design is to display all
the required information on a single screen:
• clearly and without distraction
• in a manner that can be quickly examined and understood
Think about the cockpit of a commercial jet. Years of effort went
into its design to enable the pilot to see what’s going on at a
glance, even though there is much information to monitor. Every
time I board a plane, I’m grateful that knowledgeable designers
worked hard to present this information effectively. Similar care
is needed for the design of our dashboards. This is a science
that few of those responsible for creating dashboards have
studied.
The process of visual monitoring involves a series of sequential
steps that the dashboard should be designed to support. The
user should begin by getting an overview of what’s going on and
quickly identifying what needs attention. Next, the user should
look more closely at each of those areas that need attention to
be able to understand them well enough to determine if something should be done about them. Lastly, if additional details are
needed to complete the user’s understanding before deciding
how to respond, the dashboard should serve as a seamless
launch pad to that information, and perhaps even provide the
means to initiate automated responses, such as sending emails
to those who should take action.
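To make that sequence concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration (the metric names, the thresholds, the notify() stub); it simply mirrors the three steps: get an overview, identify what needs attention, then drill into details or trigger an automated response.

    # Hypothetical dashboard metrics: (name, actual, target, link to details).
    metrics = [
        ("Revenue", 913394, 900000, "reports/revenue"),
        ("Profit", 193865, 250000, "reports/profit"),
        ("On Time Delivery", 0.94, 0.95, "reports/delivery"),
    ]

    def notify(owner, message):
        # Stub for an automated response, such as sending an email.
        print(f"email to {owner}: {message}")

    # Step 1: overview -- the status of every metric at a glance.
    # Step 2: identify and examine the items that need attention.
    for name, actual, target, link in metrics:
        pct_of_target = actual / target
        if pct_of_target < 1.0:
            # Step 3: a launch pad to details, or an automated response.
            print(f"{name}: {pct_of_target:.0%} of target -- details at {link}")
            notify("metric-owner@example.com", f"{name} is below target")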
Clearly presenting everything on a single screen requires careful design and conscious planning; even the slightest lack of organization will result in a confusing mess. You must condense the information, you must include only what you absolutely need, and you must use display media that can be easily read and understood. Most dashboard software features display media that “look marvelous” but communicate little. If the information you need is obscured by visual fluff or is delivered in fragments, the dashboard fails. Anything that doesn’t add meaning to the data must be thrown out, especially those flashy visual effects that have become so popular despite their undermining effect on communication. Elegance in communication is often achieved through simplicity of design. This is certainly true of dashboards.

Seeing Is Believing

Rather than trying to convince you with words, let me show you what I mean. Here is a series of three gauges, which I extracted from a sample dashboard that was created using the most popular dashboard product available today:

[Figure: three gauges from the sample dashboard; the center gauge shows quarter-to-date sales of 7,822 units with a green needle.]

Let’s focus only on the center gauge for a moment. If you relied on this gauge to monitor the current state of quarter-to-date sales, the value of 7,822 YTD units, without additional context, would tell you little. Compared to what? Assuming that you understand that a green needle on the gauge means that this value is good (and you are not color blind, which 10% of men and 1% of women are), your next question ought to be, how good or bad? Are we on track? Is this better than before? The right context for the key measures makes the difference between numbers that just sit there on the screen and those that enlighten and inspire action.

Quantitative scales on a graphic, such as those suggested by the tick marks around these gauges, are meant to help us interpret the measures, but they can only do so when scales are labeled with numbers, which these gauges lack. Many of the visual attributes of these gauges, including the eye-catching lighting effects that are used to make them look like real gauges, tell us nothing whatsoever.

Now take a look at an example below, taken from a small section of a dashboard that I designed. It includes methods of display that are probably unfamiliar, so let me take a moment to introduce them to you. The lines in the column labeled “Past 12 months” are called sparklines. They enhance what is often displayed using trend arrows by actually showing changes through
time in the ups and downs of a line—in this case 12 months of
data. They provide historical context for what’s happening now.
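As a rough sketch of the idea (not code from this article; the sample numbers and the choice of the matplotlib library are mine, for illustration), a sparkline is just a line chart with everything but the line itself stripped away:

    import matplotlib.pyplot as plt

    # Twelve months of hypothetical values for one metric.
    values = [48, 52, 50, 55, 61, 58, 63, 67, 64, 70, 74, 78]

    fig, ax = plt.subplots(figsize=(1.6, 0.35))      # word-sized, like a sparkline
    ax.plot(values, color="gray", linewidth=1)
    ax.plot(len(values) - 1, values[-1], "o",
            color="black", markersize=3)             # mark the current value
    ax.axis("off")                                   # no axes, ticks, or frame
    fig.savefig("sparkline.png", bbox_inches="tight", dpi=150)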
The small charts in the “% of Target” column are called bullet
graphs. I created these to replace the gauges and meters that
are typically used in dashboards with a richer form of display
that requires much less space. The prominent horizontal bar
is the metric, the small vertical line is a comparative measure
(a target in this case), and the varying intensities of gray in the
background indicate the qualitative states of poor, satisfactory,
and good. The small red icon that appears next to Profit makes
it easy to spot this item, which urgently needs your attention. Because no color other than black and gray appears anywhere in the display except the red icon, nothing distracts you from
quickly finding what needs your attention most, with nothing
more than a glance.
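A bullet graph is equally simple to sketch under the same assumptions (matplotlib, made-up numbers and gray shades): background bands for the qualitative states, a dark bar for the measure, and a vertical line for the target.

    import matplotlib.pyplot as plt

    actual, target = 104, 100                        # hypothetical, as % of target
    # Qualitative bands (start, end, gray shade): poor, satisfactory, good.
    bands = [(0, 60, "0.55"), (60, 90, "0.70"), (90, 150, "0.85")]

    fig, ax = plt.subplots(figsize=(4, 0.6))
    for start, end, shade in bands:
        ax.barh(0, end - start, left=start, color=shade, height=0.9)
    ax.barh(0, actual, color="black", height=0.3)    # the prominent bar: the measure
    ax.axvline(target, ymin=0.2, ymax=0.8,
               color="black", linewidth=2)           # comparative marker: the target
    ax.set_xlim(0, 150)
    ax.set_yticks([])
    ax.set_xlabel("% of Target")
    fig.savefig("bullet_graph.png", bbox_inches="tight", dpi=150)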
[Figure: a small section of the author’s dashboard, titled “Key Metrics YTD”. Each metric shows its actual value, a sparkline of the past 12 months, and a “% of Target” bullet graph with poor, satisfactory, and good bands on a 0%–150% scale:

  Metric             Actual
  Revenue            $913,394
  Profit             $193,865
  Avg Order Size     $5,766
  On Time Delivery   104%
  New Customers      247
  Cust Satisfaction  4.73
  Market Share       19% ]
Rather than only three metrics, which appear in the previous
example, this example displays seven key metrics, and each has
been enriched with historical context and compared to performance targets, all in roughly the same amount of space. I hope
that this single example is enough to show that there is a world of difference between dashboards that look flashy and those that give
you the information that you need at a glance. For the full story, I
invite you to read my book, Information Dashboard Design: The
Effective Visual Communication of Data, or to visit my website at
www.PerceptualEdge.com.
Copyright © 2007 Stephen Few, Perceptual Edge
The International Journal of Human Resource Management,
Vol. 20, No. 1, January 2009, 57–74
Implicit human resource management theory: a potential threat to the
internal validity of human resource practice measures
Timothy M. Gardnera* and Patrick M. Wrightb
aVanderbilt University, Nashville, USA; bCornell University, Ithaca, NY, USA
Since the publication of Huselid’s (1995) paper examining the relationship between HR
practices and firm performance, there has been an explosion of published papers examining
the empirical relationship between HR practices and various measures of firm performance.
This study examines the possibility that informants typically providing data about
organizational HR practices may be biased by an implicit theory of human resource
management. Our findings suggest the responses from subjects typically providing data about
HR practices may be biased in their reporting by the performance of the organization.
The generalizability of these results is considered and implications for future studies of the
HR-firm performance relationship reviewed.
Keywords: construct validity; mental models; research methods; strategic human resource management

*Corresponding author. Email: tim.gardner@vanderbilt.edu
ISSN 0958-5192 print/ISSN 1466-4399 online
© 2009 Taylor & Francis
DOI: 10.1080/09585190802528375
http://www.informaworld.com
Recent research in the field of Strategic Human Resource Management (SHRM) has explored
the substance and impact of organizational human resource strategies. This research has
examined both the impact of individual HR practices on firm outcomes, such as compensation
(Gerhart and Milkovich 1990) and employee selection (Terpstra and Rozell 1993), and the effect
of sets of human resource practices on firm performance (Huselid 1995; MacDuffie 1995; Delery
and Doty 1996; Ichniowski, Shaw and Prennushi 1997; Ngo, Turban, Lau and Lui 1998; Shaw,
Delery, Jenkins and Gupta 1998; Hoque 1999; Guthrie 2001; Paul and Anantharaman 2003).
This stream of research has documented statistically and practically significant relationships
between various measures of human resource practices and business unit and/or firm outcomes.
Effect sizes in these studies typically indicate that a one standard deviation increase in the
use/quality of a set of HRM practices is associated with approximately a 20% increase in profits
(return on assets) (Becker and Huselid 1998; Gerhart, Wright, McMahan and Snell 2000b;
Paauwe and Boselie 2005).
While extremely promising, this research, with few exceptions, has relied on survey
responses from one knowledgeable informant per company to measure the content and quality of
firms’ human resource management systems. Reliance on just one informant makes the
measurement of the human resource management construct susceptible to excessive random (i.e.,
unreliability) and systematic (i.e., bias) measurement error. Research by Gerhart (1999), Gerhart
et al. (2000b) and Gerhart, Wright and McMahan (2000a) points to the potentially problematic
nature of the construct validity of measures of HR practices, particularly with regard to random
measurement error. Gerhart et al. (2000a) replicated a typical SHRM study and estimated ICC(1,1), a measure of the reliability of a single informant, to be 0.16, significantly lower than Nunnally and Bernstein’s (1994) recommended minimum of .70. Wright et al. (2001a) examined
the interrater reliability of HR practice measures using data from three different SHRM studies
and observed an average item ICC(1,1) of 0.25.
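For reference, the paper does not reproduce the estimator, but ICC(1,1) in the Shrout and Fleiss (1979) notation used here is the one-way ANOVA intraclass correlation for a single rater:

\mathrm{ICC}(1,1) = \frac{MS_B - MS_W}{MS_B + (k-1)\,MS_W}

where MS_B is the between-firms mean square, MS_W the within-firms mean square, and k the number of informants per firm. Values near 0.16 or 0.25 mean that most of the variance in a single informant’s ratings is noise rather than true between-firm differences.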
Thus, every study that has examined measurement error in measures of HR practices has
demonstrated that significant amounts exist, particularly when the measure is taken from a single
respondent. Random measurement error leads to a downward bias in observed relationships.
If the bulk of the measurement error is random, this would imply that the ‘true’ impact of HR
practices on firm financial outcomes may be significantly greater than current empirical research
suggests. However, the measurement of human resource constructs is also susceptible to
systematic measurement error. Systematic error is a consistent bias in a measure, and it can
either inflate or deflate an observed relationship. This type of error may occur if respondents
report HR practices based not on accurate and valid estimates, but rather based on an implicit
theory of human resource management. For example, an implicit theory that high performing
firms are engaged in progressive HR practices while low performing firms are not engaged in
such practices, if it affects subjects’ responses to HR surveys, could produce an artificially high
correlation between HR practices and firm performance. However, to date, no empirical data
exists suggesting that respondents might hold such an implicit theory, nor that this implicit
theory might impact their responses.
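The contrast between the two error types can be stated with a textbook result from classical test theory (standard psychometrics, not derived in the paper): random error attenuates an observed correlation by the square root of the reliabilities of the two measures,

r_{xy}^{\mathrm{obs}} = \rho_{xy}^{\mathrm{true}} \sqrt{r_{xx}\, r_{yy}}.

With the single-informant reliability of 0.16 reported above for HR practice measures and a perfectly reliable performance measure (r_{yy} = 1), an observed correlation would be only \sqrt{0.16} = 0.40 times its true value. Systematic error obeys no such formula; depending on its direction, it can push the observed correlation above or below the true one.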
Thus, the purpose of this study is to examine if one form of systematic bias, implicit human
resource management theory, can impact measures of HR practices. We seek to answer two
specific questions: (1) Do typical respondents to HR practice surveys in a field setting hold
implicit theories regarding the nature of human resource practices? (2) Can implicit theories
affect how research subjects describe organizational human resource practices? In order to
answer these questions we first review the theoretical rationale and empirical evidence for the
impact of implicit theories on subjects’ responses in other areas of management research.
Review of the literature
Implicit theories and their impact in organizational research
The most commonly considered form of systematic bias in organizational research is percept-percept inflation. Percept-percept inflation results when subjects provide information for the
independent and dependent variables at the same point in time (Gerhart 1999). This type of bias
is less of a threat to the research on HR practices and firm performance because, with only a few
notable exceptions (see, for example, Delaney and Huselid 1996; Bae and Lawler 2000; Guthrie
2001), most of the major SHRM studies have collected information regarding firm performance
from a source different than the respondent providing information regarding HR practices.
However, a second, and less frequently considered possible source of systematic bias is the
implicit theories of the informants. Informants, like researchers, have implicit theories of
human resource management. As organizational research is rarely fully counterintuitive,
informant theories of HRM are likely quite similar to researchers’ theories (Staw 1975). When
responding regarding the characteristics of the organization on a survey, implicit theories may
bias the recall of information in a way consistent with the theory the researcher is trying to test.
Below we examine the theory underlying the role of implicit theories in organizational research.
Attribution theory and implicit theories
Attribution theory (Kelly 1973) attempts to explain how people make causal explanations of the
world around them and the consequences of these beliefs on behavior. The theory assumes that
all individuals behave as naïve scientists seeking to understand the causes of salient outcomes.
Possible causes that appear to covary with the effect of interest over time are attributed as likely
causes of the effect. The final choice of a cause or causes is based on the subject’s experience in
observing cause-and-effect relationships, quasi-experiments in which subjects manipulate
possible causal factors, and from implicit and explicit teachings of the causal nature of the world
(Kelly 1973, p. 115).
There is a strong conceptual basis for believing that implicit theories affect the responses
subjects provide in management research. Completing a survey for management research
involves a complex sequence of information processing events. Whether providing objective
information or subjective evaluations, subjects must be exposed to the stimulus of interest,
attend to the stimulus, encode, and store the information. There is usually a gap between the time
the information is stored and retrieved for the purpose of completing a survey. Once retrieved
from memory, the information is recorded on the questionnaire. It is unlikely informants encode,
store, and retrieve the desired information with perfect accuracy. Even in the absence of memory
decay, the entire process poses substantial information processing demands. To reduce these
demands, subjects rely on implicit theories to cue the salient information, structure it into
coherence, and fill in gaps of missing information (Rush, Thomas and Lord 1977). Thus, when
informants retrieve subjective or objective information about their organization that corresponds
to an implicit theory of firm performance, the information is likely to be biased consistent with
this theory in the direction of the (perceived) performance of the firm (Eden and Leviatan 1975;
Downey, Chacko and McElroy 1979; Martell and Guzzo 1991; Martell, Guzzo and Willis 1995;
Gerhart 1999).
This chain of events is especially likely to affect subjects providing information on measures
for which it is extremely difficult to gather information such as HR practices. Typically, SHRM
researchers are interested in the degree of enactment of actual HR practices as opposed to the
existence of stated policies (Huselid and Becker 2000). In small organizations (100 to 200
employees), asking the senior HR person about the percentage of managerial, professional, and
non-supervisory employees actively managed with customized sets of…