
Positive Transfer and Negative Transfer/Antilearning of
Problem-Solving Skills

Magda Osman
University College London

In problem-solving research, insights into the relationship between monitoring and control in the transfer
of complex skills remain impoverished. To address this, in 4 experiments, the author had participants
solve 2 complex control tasks that were identical in structure but that varied in presentation format.
Participants learned to solve the 2nd task on the basis of their original learning phase from the 1st task
or learned to solve the 2nd task on the basis of another participant’s learning phase. Experiment 1 showed
that, under conditions in which the participant’s learning phase was experienced twice, performance
deteriorated in the 2nd task. In contrast, when the learning phases in the 1st and 2nd tasks differed,
performance improved in the 2nd task. Experiment 2 introduced instructional manipulations that induced
the same response patterns as those in Experiment 1. In Experiment 3, further manipulations were
introduced that biased the way participants evaluated the learning phase in the 2nd task. In Experiment
4, judgments of self-efficacy were shown to track control performance. The implications of these findings
for theories of complex skill acquisition are discussed.

Keywords: induction, self-regulation, monitoring and control, observation versus action, skill learning

Central to skill development are two interrelated behaviors:
control and monitoring. These behaviors generate and track pro-
cesses involved in pursuing and fulfilling goals (e.g., Bandura &
Locke, 2003; Burns & Vollmeyer, 2002; Lerch & Harter, 2001;
Locke & Latham, 2002; Rossano, 2003; Sweller, 1988; VanLehn,
1996). Monitoring refers to online awareness and self-evaluation
of one’s goal-directed actions. Control refers to the generation and
selection of goal-directed actions. However, studies of skill learn-
ing in complex dynamic problem-solving tasks have focused al-
most exclusively on understanding control behaviors while ne-
glecting monitoring behaviors. Without understanding how
individuals monitor their behavior, little can be said about how
evaluative processes are used when transferring learned skills to
achieve unpracticed goals.

For example, Pilot A is training to fly a Boeing plane. In a flight
simulation, Pilot A flies the plane on a 2-hr night flight. The
schedule includes the tutor replaying Pilot A his or her flight
profile, to help instructors assess his or her performance. Pilot B
experiences the same initial training routine as Pilot A except that,
after his or her flight, he or she is played Pilot A’s flight profile,
not his or her own. A final briefing session reviews both pilots’

competence and assesses how to transfer their training successfully
to new flight patterns. Such training procedures are commonly
used in educational (e.g., Pintrich & DeGroot, 1990), clinical (e.g.,
Giesler, Josephs, & Swann, 1996), and military domains (e.g., Hill,
Gordon, & Kim, 2004) to enable individuals to identify, correct,
and improve their behaviors. In the example, both pilots share a
precise goal that involves accurately and reliably controlling a
complex dynamic control task (CDC-task: i.e., the aircraft). The
critical difference is that Pilot A’s training and assessment are
based on self-generated behavior, whereas Pilot B’s assessment is
based on comparing self- and other-generated training behavior.
The critical question that is raised by this example is as follows:
How will the two pilots’ different learning experiences impact on
their later ability to transfer their knowledge to similar and differ-
ent goals? In a series of analogous CDC-tasks, this study addresses
a related and, as yet, unexplored question: How does monitoring
affect the transfer of control behaviors in a complex skill learning
task? More specifically, how does self-evaluation of one’s goal-
directed actions (task knowledge and performance) influence what
is successfully transferred from one task to an analogous task? To
answer these questions, this study introduces a theoretical frame-
work, developed from Burns and Vollmeyer’s (2002) dual-space
hypothesis and Bandura’s (1986, 1991) social cognitive theory,
which relates monitoring to control processes. It proposes that
people track and assess the effectiveness of their skill learning in
complex dynamic learning environments. Negative evaluations
will prevent relevant skill knowledge from being applied to prac-
ticed and unpracticed goals, whereas positive assessments will
enable the transfer of relevant skilled knowledge to different goals.

Monitoring: Self-Regulatory Mechanisms

Studies of skill acquisition show that monitoring is critical to the
acquisition of complex behaviors, from athletic and musical per-

Preparation of this article was supported by Economic and Social
Research Council (ESRC) Grant RES-000-27-0119. The support of the
ESRC is gratefully acknowledged. The work was also part of the program
of the ESRC Research Centre for Economic Learning and Human Evolu-
tion. I thank Yousef Osman, David Shanks, Maarten Speekenbrink, Chris
Berry, Belen Lopez, Andrea Smyth, Yana Weinstein, Joaquin Moris, David
Lagnado, Bob Hausmann, Bjoern Meder, Momme von-Sydow, York Hag-
mayer, and Michael Waldmann for their inspired comments and encour-
agement.

Correspondence concerning this article should be addressed to Magda
Osman, Department of Psychology, University College London, Gower
Street, London, WC1H 0AP England. E-mail: [email protected]

Journal of Experimental Psychology: General, 2008, Vol. 137, No. 1, 97–115
Copyright 2008 by the American Psychological Association. 0096-3445/08/$12.00 DOI: 10.1037/0096-3445.137.1.97


formance to managerial decision making and stock brokering
(Bandura, 1991; Bandura & Locke, 2003; Ericsson & Lehman,
1996; Karoly, 1993; Rossano, 2003; Stanovich, 2004). Why?
Essentially, skilled behaviors are goal-directed pursuits, and mon-
itoring thus serves a regulatory function, tracking and selecting out
relevant information bearing on a desired outcome. One way in
which this is demonstrated is by tracking ongoing performance
through error detection (Bandura, 1991; Bandura & Locke, 2003;
Karoly, 1993; Lehmann & Ericsson, 1997; Rossano, 2003). Error
detection, or reactive control, is one of two self-regulatory mech-
anisms (reactive control, proactive discrepancy) that Bandura’s
(1986, 1991) social cognitive theory proposes people use. The
reactive control mechanism is used to evaluate and then adjust
people’s behavior in order to reach a goal (Bandura & Locke,
2003; Karoly, 1993). The second type of regulatory mechanism,
known as proactive discrepancy, involves people tracking the
current status of their performance and then incrementally setting
more and more difficult challenges. Through this, people can reach
and even exceed their initial targets. In essence, the theory pro-
poses that monitoring involves making online judgments about
one’s behavior and its relationship to a goal and that this process
is necessary in the acquisition and execution of skilled behaviors.
This study examines whether it also follows that the self-
regulatory mechanisms proposed by social cognitive theory will
influence the transference of control skills to different tasks.

Regulatory Mechanisms Through Self-Observation

In the example, the training regime that the pilots follow in-
volves error correction and detection through observation. One
pilot observes another’s flight simulation behavior; the other ob-
serves his own behavior. The latter is known as the self-
observation technique and is used extensively in educational (e.g.,
Covington, 2000; Pintrich & DeGroot, 1990) and clinical domains
(e.g., Bailey & Sowder, 1970; Dowrick, 1983; Giesler et al., 1996)
to identify and improve on maladaptive behaviors. For example,
developmental studies (Fireman & Kose, 1991, 2002; Fireman,
Kose, & Solomon, 2003) have reported that children improve their
problem-solving ability by examining videotaped presentations of
their previous attempts. In Fireman et al.’s (2003) study, children
completed the Tower of Hanoi (TOH) task and were then shown
their own moves, or another child’s previous inefficient moves, or
another child’s correct completion of the task. Presented with a
new TOH task, the children who had observed their own previous
behaviors performed best.

Similarly, the self-observation technique has been found to
improve a range of skills (e.g., meta-perception, motor learning,
dart throwing) in adults (e.g., Albright & Malloy, 1999; Carroll &
Bandura, 1982; Fireman & Kose, 1991, 2002; Knoblich & Flach,
2001). These studies indicate that the technique encourages people
to use monitoring behaviors of the kind described by Bandura and
Locke (2003), in which detection of inefficient behaviors can be
corrected and efficient behaviors exploited. The limitation of stud-
ies that have used the technique thus far is that they have focused
on people’s detection of and improvement to their behaviors while
observing themselves in action, which provides no insight into
how people monitor and correct internally represented behaviors,
such as decision-making, reasoning, and hypothesis testing behav-
iors. This study examines monitoring and its effects on transfer of

skilled behaviors by reexposing problem solvers to products of
their own strategic thinking, rather than to visual (i.e., video)
presentation of themselves performing a task. It is thus possible to
empirically control the information on which their self-regulatory
mechanisms operate and to examine the impact on the transfer of
control behaviors.

Complex Dynamic Control Tasks (CDC-Tasks)

CDC-tasks, like the one referred to in the example, have been a
popular task environment (Brehmer, 1992; Cañas, Quesada, An-
toli, & Fajardo, 2003; Funke, 2001; Kerstholt, 1996; Lipshitz,
Klein, Orasanu, & Salas, 2001) for examining the acquisition and
transfer of control skills in dynamic goal-directed environments.
The simulated environments used (e.g., air-traffic control, subway
systems) often relate closely to genuine control systems and thus
provide strong ecological validity (Buchner & Funke, 1993). Typ-
ically, a CDC-task (e.g., water purification system) includes sev-
eral inputs (salt, carbon, lime) that are connected via a complex
structure or rule to several outputs (chlorine concentration, tem-
perature, oxygenation; see Figure 1). Common to studies that use
CDC-tasks is the inclusion of a learning phase, in which learners
familiarize themselves with the system. Here learners interact with
a CDC-task by changing the inputs. They are able to learn about
the input– output relations by using the continuous feedback re-
ceived on the output variables that change as a result of the
changes to the inputs. In the test phase, the participants operate the
system and demonstrate their ability to control it by achieving a
specific goal.

As a problem-solving skill, controlling a dynamic system nec-
essarily involves reaching and maintaining goals. Thus, one ap-
proach to understanding control behaviors in CDC-tasks compares
different types of goal instructions during learning (e.g., Burns &
Vollmeyer, 2002; Osman, in press; Sweller, 1988; Vollmeyer,
Burns, & Holyoak, 1996). For instance, instructions like “explore
the system,” a nonspecific goal, are contrasted with “learn about
the system while trying to reach and maintain specific output
values,” a specific goal. In the test phase, specific goal learners
perform more poorly than do nonspecific goal learners (e.g., Burns
& Vollmeyer, 2002; Geddes & Stevenson, 1997; Sweller & Levine, 1982; Trumpower, Goldsmith, & Guynn, 2004; Vollmeyer et al., 1996).

Figure 1. Water tank system with inputs (salt, carbon, lime) and outputs (oxygenation, chlorine concentration, temperature). The complex dynamic control task shown in the figure is based on Burns and Vollmeyer’s (2002) water tank purification plant task.

Control Behaviors in CDC-Tasks

Burns and Vollmeyer’s (2002) extension of dual-space theory
(Klahr & Dunbar, 1988; Simon & Lea, 1974) has been used to
explain the goal-specificity effect and other problem-solving be-
haviors in CDC-tasks. Burns and Vollmeyer proposed that skilled
control behaviors are acquired by using the principles underlying
scientific discovery. The CDC-task is described as analogous to a
hypothesis testing environment with two spaces: the rule space,
which determines the relevant relationship between inputs and
outputs, and the instance space, which includes examples of the
rule being applied. Successful control skills develop because ex-
ploration encourages both hypothesis generation and testing,
whereas under goal-specific conditions learners simply generate
instances that fulfill goals, with no opportunity to formulate hy-
potheses. Crucially, Burns and Vollmeyer left open the possibility
that monitoring has a mediating role in the acquisition of control
behaviors. They posited that self-evaluative processes are recruited
during hypothesis testing to track the hypotheses being tested and
to update them accurately from the results of these tests.

In contrast, the dissociationist approach (Berry, 1991; Berry &
Broadbent, 1984, 1987, 1988; Dienes & Berry, 1997; Lee, 1995;
Stanley, Mathews, Buss, & Kotler-Cope, 1989) proposes that the
knowledge acquired in CDC-tasks is procedural and represents
“knowing how” to perform actions tied to specific goals. This is
independent of declarative knowledge, which is “knowing that” of
particular facts about the underlying actions, and structural knowl-
edge of the environment being operated. These forms of knowl-
edge are not only independent of each other: It is also claimed that
functionally separate cognitive mechanisms support them (see
Osman, 2004, for a review). One method used to demonstrate this
involves training people on a procedural task by observing another
perform it first: The observers are described as generating declar-
ative knowledge because they are explicitly monitoring the action
of another (e.g., Kelly & Burton, 2001; Kelly, Burton, Riedel, &
Lynch, 2003). Berry (1991) and Lee (1995) used this method to
compare the effects of procedural-based and observation-based
learning. They showed that, when participants later came to prob-
lem solve, the observers’ ability to perform the procedural task was
poorer than that of procedural-based learners. They claimed monitoring has a detrimental effect on control behaviors in CDC-tasks
and that acquisition of control behaviors is dependent on active
interaction with the CDC-task.

Present Study

Social cognitive theory and dual-space theory assume that monitoring behaviors are necessary in order to track and modulate
control performance. Therefore, monitoring should have a mediating effect on the transfer of control skills to new goals. In
contrast, dissociationists claim that procedural—not declarative—
knowledge is necessary in the acquisition of control behaviors.
Thus, monitoring should have a detrimental effect on the transferability of control behaviors in CDC-tasks. To understand how
monitoring influences the transfer of control skills to analogous
CDC-tasks, the present study asks the following questions: (a)

Does control performance improve if monitoring is based on one’s
prior self-generated behavior rather than the behavior of another
individual? (b) Can people discriminate between their own self-
generated behavior and that of another individual? (c) Is control
performance improved if monitoring of self-generated or other-
generated behaviors occurs online rather than via observation? (d)
Can indices of monitoring behavior accurately predict the trans-
ferability of control behaviors in a complex skill learning task?

General Method

In the following four experiments, participants performed two
problem-solving tasks, each consisting of a learning phase and a
test phase. All participants solved the first problem in the same
way by completing the learning and test phase, and in each
experiment the critical manipulation concerned the contents of the
learning phase in the second problem (i.e., self conditions, other
conditions). In conditions labeled self, participants in the second
problem were exposed to their own learning phase from the first
problem. In conditions labeled other, participants were yoked to a
participant in the corresponding self condition and in the second
problem were exposed to that individual’s learning phase. In
addition, the presentation format of the learning phase in the
second problem was varied: that is, it was either action-based
(Experiments 1, 2, and 3) or observation-based (Experiments 1 and
4), and the cover story was manipulated so that the second problem
was either different from the first (Experiments 1 and 4) or identical (Experiments 2 and 3). A further manipulation concerned the instructions presented prior to the presentation of the second problem (Experiments 2 and 3).

Experiment 1

Experiment 1 included four conditions. In each, participants
solved two CDC-task problems. All participants solved the first
problem in the same way, by generating their own learning expe-
rience in the learning phase. However, in the second problem, half
the participants reexperienced their original learning phase from
the first problem, through either observation-based (observe-self)
or action-based (act-on-self) learning. The remainder experienced
a different learning phase from their own, through either
observation-based (observe-other) or action-based (act-on-other)
learning.

Dissociationists (e.g., Berry, 1991; Berry & Broadbent, 1988;
Lee, 1995; Sun, Merril, & Peterson, 2001) propose that only
procedural processes are necessary in the acquisition and transfer
of knowledge in CDC-tasks. Therefore, in Experiment 1, transfer
of control performance should be facilitated if the learning phase
of the first and second problems is procedural-based (act-on-self,
act-on-other), and performance should increase across problems.
Additionally, decrements in control performance should be found
in conditions in which the learning formats of the first and second
problems are different (observe-self, observe-other) because de-
clarative knowledge is brought to bear during observation-based
learning and invokes monitoring behaviors, which interfere with
procedural processes (Berry, 1991; Berry & Broadbent, 1987) and
thus prevent transfer of control skills.

If, however, consistent with social cognitive theory and dual-
space theory, monitoring mediates control behaviors, transfer of


control behaviors should be facilitated, whatever the presentation
format of the learning phases. If monitoring is involved, then,
during the learning phase, people will be sensitive to the kind of
information presented (i.e., the source of the second learning
phase), not its presentation format (observation-based, action-
based). In this case, participants will demonstrate knowledge of the
difference in the source of the second learning phase.

Method

Seventy-two graduate and undergraduate students from Univer-
sity College London volunteered to participate in the experiment
and were paid £6 (approximately U.S.$12.18). Participants were
aged between 19 and 35, and 48 were women. Participants were
randomly allocated to one of four conditions (observe-self, act-on-
self, observe-other, act-on-other), with 18 in each. Participants
were tested individually.

Design and Materials

Experiment 1 was a mixed design that included two between-subjects variables: one comparing reexposure to self-generated learning instances with exposure to other-generated learning instances (i.e., self vs. other), and one comparing the effects of learning format on transfer of control performance (observation vs. action). Two within-subject variables
examined transfer of skill across two CDC-tasks, one measuring
control performance in two tests (Tests 1–2), the other measuring
structural knowledge in four tests (Structure Tests 1–4). The order of presentation of the two CDC-tasks was randomized for each
participant. The critical manipulation was the contents of the
second learning phase. In the first problem, all participants gener-
ated their own learning experiences. In the second, half the par-
ticipants reexperienced their original learning phase (observe-self,
act-on-self), and the other half experienced the learning phase
generated by another participant (observe-other, act-on-other). Full
details are provided in the procedure section.

CDC-Tasks

The design and underlying structure of the two CDC-tasks used
(water tank control system, ghost hunting control system) were
based on the water tank system (see Figure 1). The only differ-
ences between the two problems were the visual layout of each
system on the screen and the cover story (see the Appendix). In the
water tank control system, participants were told that, as workers
of the plant, their job was to inspect the water quality of the
system. The system was operated by varying the different levels of
salt, carbon, and lime (inputs), which then changed the three water
quality indicators: oxygenation, temperature, and chlorine concen-
tration (outputs). Participants controlling the system had to reach
specific values of the water quality indicators. In the ghost hunting
control system, participants were told that they were newly re-
cruited ghost hunters and had just returned from a field experi-
ment. Their job was to examine three pieces of equipment used in
the field (GGH meter, anemometer, trifield meter; inputs) and the
readouts of the three phenomena that these detect (electromagnetic
waves, radio waves, air pressure; outputs). Controlling the system
involved modifying the levels of the readouts of the phenomena by
manipulating the dials on each machine.

Procedure

First problem: Learning phase. In the learning phase of the
first problem, participants were presented with a computer display
with three input and three output variables. Each trial consisted of
participants interacting with the system by changing any input by
any value they chose by using a slider corresponding to each.1

Each slider had a scale from −100 to 100 units. When participants
were satisfied with their changes to the inputs, they clicked a
button labeled output readings, which revealed the values of all
three outputs. When they were ready to start the next trial, they
clicked a button labeled next trial, which hid the output values
from view. On the next trial, the newly changed inputs affected the
output values from the previous trial: Thus, the effects on the
outputs were cumulative from one trial to the next.2 After the first
block of six trials, participants were presented with Structure Test
1. A diagram of the system was shown on screen, and participants
were asked to indicate which input was connected to which output.
After this, participants began the next set of six trials.3 On com-
pletion of the second block, Structure Test 2 was presented. The
inputs that changed on each trial, the values they were changed by,
and the corresponding effects on the outputs comprised the trial
history of each participant.

Test phase of both problems (Test 1 and Test 2). After the
learning phase, participants’ ability to control the system was
tested (Tests 1 and 2). In this phase, all participants had to change
the input values to achieve and maintain set output values. In the
first and second problems, the criterion values participants had to reach in Test 1 over the course of six trials were the same; only the labels of the outputs differed: Output 1 (Water Tank = Oxygenation; Ghost Hunt = Radio Waves) = 50; Output 2 (Water Tank = Chlorine Concentration; Ghost Hunt = Electromagnetic Waves) = 700; Output 3 (Water Tank = Temperature; Ghost Hunt = Air Pressure) = 900. On
completing Test 1, participants were presented with Structure Test
3 and the second test. In Test 2, the criterion values they had to
achieve were Output 1 = 250, Output 2 = 350, and Output 3 =

1 In Burns and Vollmeyer’s (2002) study, participants were shown the
starting values of input and output values before they began the task. In the
present experiment, participants were shown only the starting values of the
input values, and not the output values, which were revealed only on the
first trial. The rationale for this change was simply to encourage partici-
pants to pay special attention to the effects on the outputs resulting from the
manipulations they made.

2 If a participant changed the input salt by 50 units on Trial 1, this would in turn change the output value of chlorine concentration to 556 (i.e., Chlorine Concentration [starting value] = 500 units + Salt [value change] = 50 units + Constant [added noise on input–output connection] = 6 units). If on Trial 2 the input salt was changed by 100 units, then the output value of chlorine concentration would be 662 (i.e., Chlorine Concentration [starting value] = 556 units + Salt [value change] = 100 units + Constant [added noise on input–output connection] = 6 units).

3 For each problem, at the start of each block of the learning phase and at the beginning of each test, the input values were set to 0, and the output levels were set as follows: Output 1 (Water Tank = Oxygenation; Ghost Hunting = Radio Waves) = 100; Output 2 (Water Tank = Chlorine Concentration; Ghost Hunting = Electromagnetic Waves) = 500; Output 3 (Water Tank = Temperature; Ghost Hunting = Air Pressure) = 1,000.


1,100 for the course of six trials. Participants were then presented
with Structure Test 4.
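The arithmetic in Footnote 2 amounts to a simple cumulative update rule: each output carries over its value from the previous trial and adds the change applied to its connected input plus a small constant. A minimal Python sketch, assuming the unit weight on the salt–chlorine connection and the 6-unit constant from the footnote's worked example (the function name is illustrative):

```python
def update_output(previous_output, input_change, weight=1, constant=6):
    """Cumulative update of one output variable on a single trial.

    The new output value equals the previous trial's output value plus
    the (weighted) change applied to the connected input plus a constant
    added as noise on the input-output connection (see Footnote 2).
    """
    return previous_output + weight * input_change + constant

# Worked example from Footnote 2: chlorine concentration starts at 500.
chlorine = update_output(500, 50)        # Trial 1: salt changed by +50 -> 556
chlorine = update_output(chlorine, 100)  # Trial 2: salt changed by +100 -> 662
print(chlorine)  # 662
```

Because the update is cumulative, identical input changes made in a different order can pass through very different intermediate output values, which is what makes the trial history of each participant informative.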

Second problem: Observation-based learning phase. In the
second problem, the learning phase was observation-based for half
the participants. Instead of changing the inputs, on each trial
participants pressed a button labeled reveal inputs and then ob-
served the sliders of the inputs changing automatically according
to prespecified values. Then they pressed a button labeled reveal
outputs, which displayed the corresponding effects on the output
values. After studying them, participants clicked a button ready for
next trial, which cleared the input and output values ready for the
next trial. As in the first problem, after Trials 6 and 12 participants
were presented with a structure test. The observe-self condition
watched their own trial history, which they had generated from the
first problem; the observe-other condition observed the trial his-
tory of a participant from the observe-self condition. For example,
in the first learning phase, Participant A from the observe-self
condition changed Input 1 on Trial 1 by 50 units. In the second
learning phase, the observe-self condition now watches Input 1
change by 50 units on Trial 1. In the first learning phase, Partic-
ipant B from the observe-other condition changed Input 2 on Trial
1 by 70 units. The observe-other condition is randomly allocated
the trial history of Participant A, and so in the second learning
phase, they simply observe Input 1 change on Trial 1 by 50 units.4

Second problem: Action-based learning phase. For the re-
maining participants, the second learning phase was action-based.
At the start of the learning phase, the act-on-self condition was
presented with a trial history sheet listing the inputs changed and
the values they were changed by for each of 12 trials. The act-on-
self condition was instructed to interact with the system on each
trial by making the changes listed on the record sheet. They were
thus mimicking the learning behaviors from the first learning
phase. The procedure was the same for the act-on-other condition
except that they were randomly allocated the trial history of a
participant from the act-on-self condition.

Posttest question. After completing the experiment, partici-
pants were informed of the manipulation to the second learning
phase and were asked which of the two (i.e., self or other) trial
histories they were exposed to. This question served as an index of
self-insight.

Scoring

Structure scores. The method used to score performance on Structure Tests 1–4 computed the proportion of input–output links correctly identified for each test. A correction for guessing was incorporated, based on Vollmeyer et al.’s (1996) procedure: the score is correct responses (i.e., the number of correct links included and incorrect links avoided) minus incorrect responses (i.e., the number of incorrect links included and correct links avoided), out of N (the total number of links that can be made). The maximum value for each structure score was 10. This scoring scheme was applied to score performance on all structure tests in Experiments 1–4. Successful performance is indicated by an increase in structure scores.
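One way to compute this guessing-corrected score is to treat a participant's link judgments as sets. The sketch below assumes the correct-minus-incorrect reading of the formula, under which judging all N candidate links correctly yields the stated maximum of N = 10; the set representation and function name are illustrative, not taken from the original scoring software:

```python
def structure_score(included, true_links, all_links):
    """Guessing-corrected structure score (after Vollmeyer et al., 1996).

    included   -- set of links the participant marked as present
    true_links -- set of links actually present in the system
    all_links  -- set of all N candidate links

    Correct responses = correct links included + incorrect links avoided;
    incorrect responses = incorrect links included + correct links avoided.
    The score is correct responses minus incorrect responses.
    """
    avoided = all_links - included
    correct = len(included & true_links) + len(avoided - true_links)
    incorrect = len(included - true_links) + len(avoided & true_links)
    return correct - incorrect
```

With 10 candidate links of which 3 are real, a participant who includes exactly the 3 true links scores 10, whereas indiscriminately marking every link as present scores 3 − 7 = −4, so guessing is penalized rather than rewarded.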

Tests 1 and 2. The procedure used in Experiments 1–4 was
based on Vollmeyer et al.’s (1996) scoring system. Control per-
formance was measured as error scores in Tests 1–2. Error scores
were based on calculating the difference between each …
