a research-based critical review

Basically, all my instructions are listed in the attached file, but here is what I expect you to help me with. This assignment carries a lot of weight this semester. To help us, our professor divided the whole essay into three parts before the final draft: the first was the introduction, the second was the literature review, and the third is the discussion. I hope you can follow our professor's instructions step by step so that I can get a good grade. First, for the introduction, the research question of my essay has changed, and I want you to fix my thesis statement, because my professor thought it was too similar to the original article. Second, the two articles I found did not meet our professor's requirements: one of them is a literature review of somebody else's work, and the other is more of a point-form article. He expects us to do our own research from longer articles of roughly 10 or 20 pages, reading and reviewing them in order to pull out evidence to support my own paper. So I need you to help me redo my literature review with two related, full-length journal articles you find, with proper citations in MLA format. Third, I will have the instructions for the discussion part by the end of Tuesday; that draft is due on Thursday, and I hope you can write the discussion for me by Thursday at 4 p.m. Thanks for taking the time to read my long instructions; this assignment is very important to me, and I hope you can help me out.
online_deception_in_social_media__2_.pdf

introduction_draft_1.docx


online_deception_draft_2.docx

instrction.doc

Unformatted Attachment Preview

contributed articles
Online Deception in Social Media

BY MICHAIL TSIKERDEKIS AND SHERALI ZEADALLY

COMMUNICATIONS OF THE ACM | SEPTEMBER 2014 | VOL. 57 | NO. 9

The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.

Proliferation of Web-based technologies has revolutionized the way content is generated and exchanged through the Internet, leading to proliferation of social-media applications and services. Social media enable creation and exchange of user-generated content and design of a range of Internet-based applications. This growth is fueled not only by more services but also by the rate of their adoption by users. From 2005 to 2013, users and developers alike saw a 64% increase in the number of people using social media;1 for instance, Twitter use increased 10% from 2010 to 2013, and 1.2 billion users connected in 2013 through Facebook and Twitter accounts.24 However, the ease of getting an account also makes it easy for individuals to deceive one another. Previous work on deception found that people in general lie routinely, and several efforts have sought to detect and understand deception.20 Deception has been used in various contexts throughout human history (such as in World War II and the Trojan War) to enhance attackers' tactics. Social media provide new environments and technologies for potential deceivers.
There are many examples of people
being deceived through social media,
with some suffering devastating consequences to their personal lives.
Here, we consider deception as a
deliberate act intended to mislead others, while targets are not aware or do
not expect such acts might be taking
place and where the deceiver aims to
transfer a false belief to the deceived.2,9
This view is particularly relevant when
examining social media services where
the boundary between protecting
one’s privacy and deceiving others is
not morally clear. Moreover, such false
beliefs are communicated verbally and
non-verbally,14 with deception identifiable through cues, including verbal
(such as audio and text), non-verbal
(such as body movement), and physiological (such as heartbeat).
Training and raising awareness
(such as might be taught to security
personnel17) could help protect users of social media. However, people
trained to detect deception sometimes
perform worse in detection accuracy
than people who are not trained,17 and
evidence of a “privacy paradox” points
to individuals sharing detailed information, even though they are aware of
privacy concerns,26 making them more
vulnerable to attack. Making things
worse, social media, as a set of Internet-based applications, can be broadly
defined as including multiple virtual
environments.15,16
key insights

  • In social media, deception can involve the content, the sender, or the communication channel, or all three together.
  • The nature of a social medium can influence the likelihood of deception and its success for each deception technique.
  • Deception detection and prevention are complicated by the lack of standard online deception detection methods, of computationally efficient methods for detecting deception in large online communities, and of social media developers looking to prevent deception.
DOI:10.1145/2629612
Exploring deception in social media, we focus on motivations and techniques used and their effect on potential targets, as well as on some of the
challenges that need to be addressed
to help potential targets detect deception. While detecting and preventing
deception are important aspects of social awareness relating to deception,
understanding online deception and
classifying techniques used in social
media is the first step toward sharpening one’s defenses.
Online Deception
Nature often favors deception as a
mechanism for gaining a strategic advantage in all kinds of biological relationships; for example, viceroy butterflies deceive birds by looking like
monarch butterflies (which have a bitter taste), ensuring their survival as long
as there are not too many in a particular
area.8 Similarly, humans have long used
deception against fellow humans.3 In
warfare, Chinese military strategist and
philosopher Sun Tzu29 famously said,
“All warfare is based on deception.”
Social media services are generally
classified based on social presence/
media richness and self-representation/self-disclosure.16 Social presence
can also be influenced by the intimacy and immediacy of the medium
in which communication takes place;
media richness describes the amount
of information that can be transmitted
at a given moment. Self-representation determines the control users have in representing themselves, whereas self-disclosure defines whether one reveals
information, willingly or unwillingly.
Using these characteristics, Kaplan
and Haenlein16 developed a table including multiple aspects of social media: blogs, collaborative projects (such
as Wikipedia), social networking sites
(such as Facebook), content communities (such as YouTube), virtual social worlds (such as Second Life), and
virtual game worlds (such as World of
Warcraft). Table 1 outlines an expanded classification of social media that
also includes microblogging (such as
Twitter) and social news sites (such as
Reddit). We categorize microblogging
between blogs and social networking
sites15 and social news sites above microblogging, given their similarity to
microblogging in terms of social presence/media richness (limited content communicated through the medium and average immediacy as news comes in) and their low self-presentation/self-disclosure due to their nature as content-oriented communities.
Social media that give users freedom to define themselves are in the
second row of Table 1, and social media that force users to adapt to certain
roles or have no option for disclosing
parts of their identities are in the first
row. Moreover, along with increased
media richness and social presence,
we note a transition from social media
using just text for communication to
rich media simulating the real world
through verbal and non-verbal signals,
as well as greater immediacy in virtual game worlds and virtual social worlds. The differences between
these types of social media affect how
deception is implemented and its usefulness in deceiving fellow users.
In most social media platforms,
communication is generally text-based
and asynchronous, giving deceivers an
advantage for altering content—an inexpensive way to deceive others. Zahavi31 identified the difference between
assessment signals that are reliable
and difficult to fake and conventional
signals that are easier to fake; for example, in the real world, if older people
want to pass as younger, they might
dress differently or dye their hair to
produce conventional signals. However, it would be much more difficult to
fake a driver’s license or other authentic documentation. But social media
provide an environment in which assessment signals are neither required
nor the norm, making deception easy;
for instance, gender switching online
may require only a name change.
Difficulty Perpetrating Online Deception
The level of difficulty perpetrating online deception is determined by several factors associated with the deceiver,
the social media service, the deceptive
act, and the potential victim. Significant difficulty could deter potential
deceivers, and lack of difficulty may
be seen as an opportunity to deceive
others (see Figure 1).
The deceiver. Several factors associated with deceivers determine the difficulty of trying to perpetrate online
deception, including expectations,
goals, motivations, relationship with
the target, and the target’s degree of
suspicion.2 Expectation is a factor that
determines the likelihood of success
in deception. More complex messages
have a greater likelihood of being communicated.20 Goals and motivations
also determine the difficulty of perpetrating a deception. Goals are broader
and longer term, and motivations consist of specific short-term objectives
that directly influence the choice and
type of deception. A taxonomy developed by Buller and Burgoon2 described
three motivators for deception: “instrumental,” where the would-be deceiver
can identify goal-oriented deception
(such as lying about one’s résumé on
a social medium to increase the likelihood of more job offers); “relational,”
or social capital (such as aiming to
preserve social relationships typical in
online social networks);26 and “identity” (such as preserving one’s reputation from shameful events in an online profile). These motivators in turn
determine the cost or level of difficulty
to deceivers in trying to deceive; for
example, deceivers motivated to fake
their identity must exert more effort
offline due to the presence of signals
much more difficult to fake than online where many identity-based clues
(such as gender and age) may take the
form of conventional signals (such as
adding information to one’s profile
page without verification). Difficulty
perpetrating a deception is also determined by the deceiver’s relationship to
a target. Familiarity with a target and
the target’s close social network make
it easier to gain trust and reduce the
difficulty of perpetrating deception.
Many users assume enhanced security
comes with technology so are more
likely to trust others online.4 Moreover,
the level of trust individuals afford a
deceiver also reduces their suspicion
toward the deceiver, thereby increasing the likelihood of being deceived.
Moral cost also increases the difficulty of perpetrating deception.26
Moral values and feelings can influence what deceivers view as immoral in
withholding information or even lying.
In the real world, the immediacy of interaction may make it much more difficult to deceive for some individuals. In
contrast, in the online world, distance
and anonymity28 contribute to a loss of
inhibition; the moral cost is thus lower
for deceivers.
Social media. Social media require
potential targets and would-be deceivers alike to expand their perspective on
how interactions are viewed between
receiver and sender during deception;
for instance, “interpersonal deception
theory”2 says the interaction between
a sender and a receiver is a game of iterative scanning and adjustment to ensure deception success.
Donath8 suggested that if deception is prevalent in a system (such as
Facebook) then the likelihood of successful deception is reduced. It makes
sense that the prevalence of deception
in an online community is a factor
that also determines difficulty perpetrating deception. Social media services that encounter too much deception
will inevitably yield communities that
are more suspicious. Such community
suspicion will increase the number of
failed attempts at deception. Moreover, increasing a potential target’s
suspicion will likewise increase the
difficulty, thereby deterring deceivers
from entering the community in the
first place, though some equilibrium
may eventually be reached. However,
this rationale suggests communities
without much deception are likely
more vulnerable to attacks since suspicion by potential victims is low. Determining the prevalence of deception
in a community is a challenge.
Similarly, the underlying software
design of social media can also affect
the degree of suspicion; the level of
perceived security by potential victims
increases the likelihood of success for
would-be deceivers.11 Software design
can cause users to make several assumptions about the level of security
being provided. Some aspects of the
design can make them more relaxed
and less aware of the potential signs of
being deceived; for example, potential
Table 1. Social media classifications.

                                      Social presence/Media richness
Self-presentation/Self-disclosure     Low                       Medium                     High
Low                                   Collaborative projects,   Content communities        Virtual game worlds
                                      social news sites
High                                  Blogs, microblogging      Social networking sites    Virtual social worlds
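Read as a lookup structure, this classification can be sketched as a small mapping. The encoding below is purely illustrative (the level labels and variable names are ours, not the authors'), with microblogging placed at the low end of the richness axis per the note that it sits between blogs and social networking sites:

```python
# Hypothetical encoding of Table 1: each social-media type mapped to its
# (self-presentation/self-disclosure, social presence/media richness) levels.
CLASSIFICATION = {
    "collaborative projects":  ("low",  "low"),
    "social news sites":       ("low",  "low"),
    "content communities":     ("low",  "medium"),
    "virtual game worlds":     ("low",  "high"),
    "blogs":                   ("high", "low"),
    "microblogging":           ("high", "low"),
    "social networking sites": ("high", "medium"),
    "virtual social worlds":   ("high", "high"),
}

def media_by_richness(level):
    """Return all media types at a given social presence/media richness level."""
    return sorted(name for name, (_, richness) in CLASSIFICATION.items()
                  if richness == level)

print(media_by_richness("high"))  # ['virtual game worlds', 'virtual social worlds']
```

Such an encoding makes it easy to ask, for instance, which media types combine high richness with high self-disclosure, the cell the article associates with rich, real-world-like interaction.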
Figure 1. Entities and participants involved in online deception.
Figure 2. Interaction without and with deception.
targets may falsely assume that faking
profile information on a social networking site is difficult due to multiple
verification methods (such as email
confirmation). Moreover, a system’s
assurance and trust mechanisms determine the level of trust between
sender and receiver.11 Assurance mechanisms can either reduce the probability of successful deception or increase
the penalty for deceivers.11 A tough
penalty means increased difficulty for
deceivers, especially when the chances
of being caught are high. Assurance
mechanisms are considered effective
in certain contexts where the need for
trust may be completely diminished. In
social media, assurance mechanisms
are much more difficult to implement,
penalties and the chances of being
caught may be or seem to be lower than
those in offline settings, and the cost of
deception is much lower. Media richness is another factor determining difficulty perpetrating deception. In this
context, Galanxhi and Nah10 found deceivers in cyberspace feel more stress
when communicating with their victims through text rather than through
avatar-supported chat.
Deceptive acts. Time constraints
and the number of targets also help determine the difficulty perpetrating online deception. The time available and
the time required for a successful attack are important, especially in social
media services involving asynchronous
communication. Moreover, the time
required for deception to be detected
also determines the effectiveness of
the deception method being used. For
instances where deception must never
be discovered, the cost of implementing a deception method may outweigh
any potential benefit, especially when
the penalty is high. The social space
in which deception is applied and the
number of online user targets who are
to be deceived help determine the level
of difficulty implementing a deception
method; for example, in the case of politicians trying to deceive through their
online social media profiles, all potential voters face a more difficult challenge deciding how to vote compared
to deceivers targeting just a single voter.
Type of deception is another important
factor. Complex deceptive acts motivated by multiple objectives (such as faking an identity to manipulate targets
into actions that serve the deceiver’s
goals) are more difficult to perpetrate.
Potential victim. In real-world offline settings, the potential target’s
ability to detect deception may be a
factor determining the difficulty perpetrating deception; for example, in
a 2000 study of Internet fraud using
page-jacking techniques, even experienced users of social media failed to
detect inconsistencies, except for a select few who did detect it, thus showing
detection is not impossible.11 In social
media, the potential targets’ ability to
detect deception also depends to some
extent on their literacy in information
communication technology. Deceivers
must therefore evaluate the technology
literacy of their potential victims. Users with high technology literacy have
a significant advantage over casual Internet users, so the cost to a deceiver
as calculated through a cost-benefit
analysis for a social engineering attack
may be higher.
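One illustrative way to read this recurring cost-benefit framing (the function and its symbols are our sketch, not a model given in the article): a rational deceiver proceeds only when the expected benefit exceeds the effort, the moral cost, and the penalty weighted by the chance of being caught.

```python
def attack_is_worthwhile(benefit, effort_cost, moral_cost,
                         detection_probability, penalty):
    """Toy expected-value reading of the article's cost-benefit framing:
    deceive only if the benefit outweighs effort, moral cost, and the
    expected penalty (penalty weighted by the chance of being caught)."""
    expected_cost = effort_cost + moral_cost + detection_probability * penalty
    return benefit > expected_cost

# The same attack is deterred in a more suspicious community, where the
# detection probability is higher:
print(attack_is_worthwhile(10, 2, 1, 0.1, 20))  # True  (10 > 2 + 1 + 2)
print(attack_is_worthwhile(10, 2, 1, 0.8, 20))  # False (10 < 2 + 1 + 16)
```

This also captures why online settings lower the bar: anonymity reduces the moral cost, and weak assurance mechanisms reduce both the penalty and the detection probability.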
Deception Techniques
Various techniques are reported in
the literature for deceiving others in
social media environments, including
bluffs, mimicry (such as mimicking a
website), fakery (such as establishing
a fake website), white lies, evasions,
exaggeration, webpage redirections
(such as misleading someone to a false
profile page), and concealment (such
as withholding information from one’s
profile).21 We use the communication
model proposed by Madhusudan20 to
classify deception techniques for social media and evaluate their effectiveness in achieving deception.
Deception model. The model (see
Figure 2) consists of a sender (S), the
content or message (I), the channel
through which communication takes
place (C), and the receiver (R). If a receiver’s expected model (the so-called
SIC triangle) is different from the received model (any or all SIC elements
have been altered) then deception has
occurred. This is also in line with Ekman’s definition9 of deception, saying
a receiver cannot anticipate deception
for deception to be considered deception. Deception is perpetrated by manipulating any of the SIC elements or
any combination thereof. We present
in the following paragraphs an overview of social media and identify factors and social-media types where
deception can be perpetrated with
minimal effort at low cost, resulting in
a fairly high deception success rate (see
Table 2). We identified these factors
from the literature.
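The SIC comparison can be sketched in code. This is a minimal illustrative sketch (the class and function names are ours, not from the article): deception has occurred when any element of the received model differs from the receiver's expected model.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class SICModel:
    """A receiver's model of an exchange: sender (S), content (I), channel (C)."""
    sender: str
    content: str
    channel: str

def manipulated_elements(expected, received):
    """Deception has occurred if any SIC element of the received model
    differs from the receiver's expected model; return which ones differ."""
    return [f.name for f in fields(SICModel)
            if getattr(expected, f.name) != getattr(received, f.name)]

expected = SICModel(sender="alice", content="meeting at 5pm", channel="email")
received = SICModel(sender="mallory", content="meeting at 5pm", channel="email")
print(manipulated_elements(expected, received))  # ['sender']
```

An empty result means the received model matches expectations; a non-empty result names the manipulated elements, which is how the techniques below (content, sender, and channel deception) are distinguished.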
Content deception. Manipulating
content, as in falsifying information,
is presumably the most common way
to deceive others. Social media that
focus primarily on content (such
as blogs, microblogs, content communities, and social news sites) are
highly susceptible to such deception.
Technology allows anyone with access privileges (legitimate and illegitimate) to manipulate multimedia
files to an extraordinary degree. Tampering with images23 is an effective
way to fake content (such as representing that one traveled around the
world through one’s photos, altering
them and sharing them through social media). Such a scheme may help
deceivers elevate their social status
and win a victim’s trust to obtain further information. In addition to videos and images, the ease of manipulating content that is at times based
on text alone yields low-cost deception and high probability of success
due to the targets’ low information
literacy and lack of expectation for
verifiability and even accountability.
In addition, social media (such as social network sites and virtual social
worlds) offering profile management
for users are also susceptible, especially when advertising emphasizes
the promise of new relationships.
Competent deceivers may thus have a
substantial advantage.
Collaborative projects (such as
Wikipedia) are less likely to be affected by deception, or manipulating (I).
The difficulty in perpetrating deception may seem low, but the likelihood
of success (at least over the long term)
is also low. This trade-off is due to the
software design of these types of social
media, where many-to-many communication enables many people to see
the content. We see examples of content deception in Wikipedia, where vandals (people altering content with intent to deceive others) are not only eventually detected but are also fought by other users who assume that role.25 Furthermore, assurance mechanisms (such
as a requirement for content validity,
tracing content back to its source) are
built into the system to ensure content
deception is more apparent. Another
example of content deception in social
media involves open source software
managed by multiple users where it is
much more difficult to add malicious
content and perpetrate a deception
because multiple individuals evaluate
the code before it is released. Virtual
game worlds also have low probability
for deception due to strongly narrated
elements (such as being assigned specific roles that force players to follow a
specific course of action …