13.6 International Differences in Laws Related to Selection Use in Social Media




G.B. Schmidt and K.W. O’Connor



there have been requests for more than 733,000 URLs to be removed, and Google

has agreed to the removal of approximately 238,000 of the requested URLs

from search results (Lomas, 2015a). Data analysis by Reputation VIP, a reputation

management company that offers a service to help European citizens make such

requests, found that requests made through their service were rejected by Google

approximately 70 % of the time (Lomas, 2015b). Thus, it currently seems that the

majority of requests are being refused.

Reputation VIP recently released a report on what types of content links have

been removed from the 61,500 requests they have administered through their application. The most common category of reasons for requests was “invasion of privacy” (58.7 %), which involves sites sharing things like personal addresses, religious

affiliations, or political party membership (Lomas, 2015b). From the selection perspective, some of this information would be irrelevant and other information could

lead to finding out individual characteristics that might run afoul of equal employment opportunity laws in the United States and elsewhere. The second most common category of reasons was “damage to reputation” (11.2 %), which, depending on

why it is damaging might be useful for organizations in screening out processes

(Lomas, 2015b). For example, an applicant being delinquent in payments for previous business dealings might be information an organization would want if it was

hiring someone to be responsible for a financial position.

The report also has categories of what types of sites the material was present on

when the request was accepted. The largest specific category was social media sites/

communities (20 %), with 6,552 URL links removed from search engines (Lomas,

2015b). For organizations using search engines to find candidate information on

social media, these removals might be material that would have led to the candidate

being screened out.

One point of contention so far between Google and the European Union privacy

regulators has been whether web content search links should be deleted from only

European Union country-specific Google search engines sites or from the general

search engine site Google.com. The European Union regulators are currently arguing that removal needs to be for the general Google.com domain as well; otherwise,

people in the European Union can circumvent the ruling by going to the general

Google.com domain for search (Lomas, 2015a). Google has refused to do so.

Removal from general Google.com search could have significant implications for

employers and individuals worldwide who are currently able to access such information. For now, on a practical level, organizations using search engines to find

social media material can use the general domains over country-specific domains,

but the European Union’s stance is that this is a loophole that should be closed.

The “right to be forgotten” has clear implications for social media use in the

screening of candidates. If past indiscretions are removed from search results, they

likely will not appear in the screening processes done by organizations. For now,

loopholes exist through using other country-specific sites or international sites and

the majority of removal requests are being rejected by Google. This could certainly

change, however, and the intent of the European Union regulators seems so far to be

that these “loopholes” need to be closed. As currently applied, there has not been



13 Legal Concerns When Considering Social Media Data in Selection



any discussion of whether organizations should be able to use such removed material if it is discovered through a loophole. Making such use illegal regardless of how the material is discovered

could certainly be a future way the “right to be forgotten” would be applied.

While this European Union ruling has not been applied to international social

media cases, it could certainly have future implications as well as potentially act as

a model for laws passed in other countries or even in states in the United States. The

ruling is likely to have more far-reaching consequences as time passes and more content is created and requested to be removed.



13.6.2 China Internet Censorship Laws



In the case of China, the biggest challenge to using social media data in the selection

process is the significant set of legal restrictions on Internet use and what sites can be visited. Since July 2009, Facebook and Twitter have been blocked within China’s borders, and use by Chinese nationals elsewhere is also illegal (The Economist, 2013). The overall program of censorship and Internet restrictions has been called by some “The Great Firewall,” although China has a complex

web of laws and tools that involve both public and private sector operations (The

Economist, 2013).

In 2010, a white paper from the Chinese government offered an idea of “Internet

sovereignty,” under which all Internet users in China, both citizens and foreigners, were

required to abide by Chinese laws and regulations. Chinese Internet companies have

also been required to sign a pledge of self-regulation and professional ethics that

built on the ideas of the white paper (Xu, 2015).

Much of the enforcement and censorship is done through what is called the

“Golden Shield Project.” Through the Golden Shield Project, the Chinese government takes actions like restricting bandwidth, filtering search results by keywords

that are seen as not in the best interest of China, and blocking access to certain

websites such as the previously mentioned Facebook and Twitter (Xu, 2015). Site

access can also be blocked for short-term periods such as during politically sensitive

events and anniversaries. Chinese Internet users can attempt to use various circumvention applications and other means (e.g., virtual private networks) to view blocked

websites, although the Chinese censorship enforcers are consistently working to

shut down any such workarounds (The Economist, 2013).

A more recent suspected addition to China’s censorship program has been dubbed “The Great Cannon.” The Great Cannon acts as a man-in-the-middle for web traffic such that, when a user in China tries to go to or search for a website, the program hijacks traffic to that website’s IP address and replaces normal, benign web content with malicious content (Weaver, 2015). The installed malware is then used to make that user’s computer an unwilling participant in denial-of-service attacks

against a website. The websites attacked to date have both been sites with pages related to getting around the Chinese government’s censorship applications: the New York Times’ Chinese mirror site and the anti-censorship organization GreatFire.org. The Chinese government has not publicly acknowledged the existence of the Great Cannon or its use of it, although the analysis of Marczak et al. (2015) shows strong support for such a connection, including evidence of its co-location with the Golden Shield Project servers.

Marczak et al. (2015) looked at the nature of the Great Cannon soon after its first

identified action in March 2015. Their technical report goes into great detail on the

technical aspects of this firewall application and evidence that it originates from the

Chinese government. One of the most serious aspects of their report is its description of

how the Great Cannon could be deployed in powerful ways with small technical and

software changes. Instead of targeting Internet users going to particular websites,

the program could be used to target particular Internet users. In such a case, the

Great Cannon could target particular users who access even one unencrypted site that uses a server in China for the website or the ads on that site, and deliver malware to that person’s computer. A user very well might not realize they are

accessing a site using a server in China (Marczak et al., 2015). For a person within

China, it would be very difficult to get on the Internet without interacting in some

way with an unencrypted website or web content that would be vulnerable to the

Great Cannon.

Another potential application noted by Marczak et al. (2015) is the widespread interception of all unencrypted emails within China, or sent to those in China or using China-based servers, and the potential addition of malicious attachments to such emails.

Email could be manipulated using the Great Cannon to spread malware to support

future denial-of-service attacks or other actions. While there is no evidence of such use currently, Marczak et al. (2015) suggest that the existing architecture of the

Great Cannon could support such uses with relatively small technological

modifications.

China’s extensive censorship system has major implications for organizations

using social media in selection. One of the most basic logistical issues is that Chinese citizens who are applicants are unlikely to have a presence on common social media sites, with Facebook and Twitter blocked and LinkedIn available but open to government

censorship (Newman, 2014). This means that organizations will need to look at

Chinese social media sites instead for information, potentially hurting comparability to candidates in other parts of the world. With the extensive censorship laws and

enforcement, Chinese citizen workers are less likely to openly share information

through social media sites, which means social media data may not exist for the types of behaviors organizations are using for screening-out procedures.

The use of social media in the selection processes is even more complicated if

the evaluator is located in China, which could certainly happen with multinational

organizations. The Internet restrictions apply to all people in China regardless of

citizenship (The Economist, 2013). This means that, from a practical perspective,

accessing applicants’ social media profiles on major sites like Facebook or Twitter

while the evaluator is located in China would be extremely difficult. Even with

workaround methods, the examination of social media content on such banned sites

would be illegal and could result in legal consequences for the employee who uses

them and the company that has asked him or her to do so. Thus, in the China context,






social media use in selection processes might not be practical due to the existing

censorship-related laws and enforcement.

The Great Cannon offers a different challenge to organizations operating in

China or interacting with individuals in China. The security of organizational data

could be compromised by The Great Cannon. Searches for relevant candidate

information could lead to malware being downloaded on an organization’s computers

and networks, spreading beyond computers in China to other countries and users. In

theory, this could lead to confidential information being spread in a way similar to

what was seen in the Sony Pictures hack (Elkind, 2015). Confidential information

release could also lead to lawsuits and lost customers. To deal with such potential

threats, organizations need to consider implementing measures that encrypt basic online practices and email systems and improve the overall security of

their online systems. Unfortunately, there are currently no easy solutions to implement to resolve all the issues inherent in programs such as the Great Cannon.
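One assumption-laden way to operationalize the encryption advice above is to refuse plain-HTTP fetches during screening searches, since an in-path injection system of the kind described can only tamper with unencrypted traffic. A minimal sketch (the function name and policy are illustrative, not from the chapter):

```python
# Illustrative precaution, not a complete defense: allow screening
# searches to fetch only HTTPS URLs, since unencrypted HTTP traffic
# is what in-path injection systems can modify.
from urllib.parse import urlparse

def is_safe_to_fetch(url: str) -> bool:
    """Return True only for HTTPS URLs."""
    return urlparse(url).scheme == "https"

print(is_safe_to_fetch("http://example.com/profile"))   # False
print(is_safe_to_fetch("https://example.com/profile"))  # True
```

Such a rule reduces, but does not eliminate, exposure; it does nothing about unencrypted email or third-party ad content loaded by otherwise encrypted pages.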



13.6.3 United Arab Emirates Defamation Laws



The severity of United Arab Emirates defamation laws and how they are applied to

social media content was highlighted internationally in the legal case of Ryan Pate,

an American working for a UAE company who posted criticism of his company on Facebook. Pate was in the United States when the post was made; upon returning to the UAE, he was arrested and faced a criminal charge of defamation against his employer, punishable by up to 5 years in jail and a fifty-thousand-dollar fine (Altman, 2015). He ultimately was able to get the charges dropped

due to intervention by his United States Congressman and the US State Department

(Ingles, 2015). Most workers, and certainly those who are citizens of the UAE, are

unlikely to be that lucky.

This criminal charge was due to the severe defamation laws existing within the

United Arab Emirates. Defamation that causes harm to the person the statement is

about in any oral or written form is a criminal offense. This is quite broadly defined

and just the presence of criticism may be enough to violate these laws (Kelly &

Proctor, 2012). If the person’s statements are given on behalf of or seen to represent

the organization, the person’s manager could be similarly charged. Company computers or devices can also be accessed in the investigation of such charges (Kelly &

Proctor, 2012).

These strict and broad defamation laws carry both practical and legal consequences for organizations using social media in selection processes. From a practical standpoint, stricter standards for screening out candidates may be warranted. An applicant who defames a previous employer, complains about an experience as a customer, or even makes a political statement may be in danger of

defamation charges and the accompanying imprisonment. Such defamation behavior by an employee could lead to authorities accessing company devices and the

person’s manager potentially being arrested as well. Organizations may want to take






extra efforts to avoid such situations and rigorous social media screening could help

this goal. Of course, adopting generally stricter standards of social media screening may be more likely to lead to lawsuits or perceived unfairness among candidates in

other countries without such defamation laws.



13.6.4 Overall Practical Legal Guidelines for Social Media Selection Internationally



When considering social media selection internationally, a deep understanding of

current national and international laws and how they are applied is essential. We

highlighted three examples above, but there are many other examples that could at

least potentially impact the social media selection process. Organizations need to

review existing social media-related case law and existing selection-related case law

for all nations in which they have workers.

It must be noted that it is possible, and perhaps even likely, that laws of one

nation will conflict with those of others with regard to how social media use in

selection could proceed. Different countries can make different decisions on how

privacy, national interests, and business interests should be balanced. As such, universal procedures and implementation for using social media data in selection internationally may not be practical or legally possible. Such a state of affairs may mean

organizations need general guidelines for social media data use that can be modified

by each particular country’s legal context.



13.7 Overall Practical Legal Guidelines for Social Media Data Use in Selection



For organizations, it is important to consider how they can reduce legal risks related

to using social media data in selection. Based on existing laws, there are some general actions organizations can take when engaging in social media data use for selection that should help reduce legal risks. One of the most important aspects is to create

clear procedures, standards, and criteria for how social media data will be considered. As noted in Williams et al. (2013), inconsistent hiring processes accounted for

22 % of legal cases and problematic criteria for 17 % of legal cases in their sample.

Clear procedures for use are crucial for those in HR doing the searches to engage in

them in a consistent manner. If they are inconsistent, there is potential for the use to

not only be ineffective but also to allow the biases of the employee doing the search

to affect decisions, leaving the organization open to lawsuits related to bias or

favoritism.
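As a rough illustration of documented, consistently applied criteria (the flag names and justifications below are hypothetical, not drawn from the chapter or any real screening system), the rubric could be encoded so every candidate is evaluated against the same job-related flags:

```python
# Hypothetical screening rubric: every flag an evaluator may act on is
# documented up front, together with its job-related justification.
SCREENING_CRITERIA = {
    "discloses_confidential_info": "job-related: handling of confidential material",
    "harassing_posts": "job-related: workplace conduct",
}

def screen_out_reasons(candidate_flags):
    """Return documented reasons, in a fixed order, ignoring undocumented flags."""
    return [SCREENING_CRITERIA[flag]
            for flag in sorted(candidate_flags)
            if flag in SCREENING_CRITERIA]

# An undocumented observation (e.g., a political post) produces no reason:
print(screen_out_reasons({"harassing_posts", "political_post"}))
# ['job-related: workplace conduct']
```

The point of the sketch is the discipline, not the code: any content category not written down in advance cannot become an ad hoc basis for rejection.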

Organizations need to clearly define what criteria they are using and why they

are being used. This is especially true as social media and online-related matters are

generally new territory for the United States legal system. As such, judges and juries






may not have significant experience with this selection method and could see it as

potentially suspect. The tool of social media data itself may be seen by some (e.g.,

judges or applicants) as unfair in the selection context. As such, organizations need

to clearly define what criteria they have in examining social media and, in the best-case scenario, have validation study data on its effectiveness in predicting workplace outcomes. To date, validation data on the effectiveness of social media use for

selection in the academic realm has been mixed (Kluemper et al., 2012; Stoughton,

Thompson, & Meade, 2013; Van Iddekinge, Lanivich, Roth, & Junco, 2013). We

recommend that organizations perform validation studies for their own use of social

media data in screening processes and the employee outcomes the social media data

predicts.

With potential concerns that examining applicants’ social media data will reveal

protected class information that could lead to charges of discrimination against

them, organizations may consider measures to limit such risks. One such

way to do this would be to decouple who collects social media screening information from who makes employment decisions. In this case, one person would be in

charge of collecting social media data and then handing over only relevant information to the person making the selection decision, removing all protected class

information found in social media content.
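The decoupling idea can be pictured as a redaction step sitting between the collector and the decision-maker. A minimal, hypothetical sketch (the field names are illustrative; no real screening system or site schema is assumed):

```python
# Hypothetical redaction step in a two-person screening process: the
# collector strips protected-class fields before the decision-maker
# ever sees the record. Field names are illustrative only.
PROTECTED_FIELDS = {"photo", "age", "religion", "political_affiliation",
                    "marital_status", "national_origin"}

def redact_profile(raw_profile: dict) -> dict:
    """Return only the fields the decision-maker is allowed to see."""
    return {k: v for k, v in raw_profile.items() if k not in PROTECTED_FIELDS}

collected = {"name": "A. Applicant", "skills": ["accounting"],
             "religion": "example", "photo": "example.jpg"}
print(redact_profile(collected))
# {'name': 'A. Applicant', 'skills': ['accounting']}
```

In practice the hard part is not the filtering itself but deciding, jurisdiction by jurisdiction, which attributes belong on the protected list and documenting that the same list is applied to every candidate.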

LinkedIn allows user profiles to be downloaded in PDF form in a stripped-down version that does not include pictures or post history. Having one HR worker download the profiles to PDF and provide them to the person making employment decisions would remove potential protected class information that could be discovered

through the profile picture and an examination of individual LinkedIn posts. This

may not help for age-related discrimination claims, however, as information is provided about college years and years in the workplace. While no actual birth date is

provided, an approximate one could be estimated using employment and education

dates. This ability to estimate the age of a candidate was part of the Shoun v. Best Formed Plastics (2016) case mentioned earlier in this chapter. LinkedIn profile PDF download would not have alleviated the issue alleged in that case. Thus, downloading PDFs of LinkedIn profiles would alleviate some, but not all, concerns. Also important

is that other sites that might be looked at, such as Facebook or Twitter, lack this

feature and thus would need more manual (or application-based) scrubbing of protected class information. Organizations might consider creating in-depth and clear

procedures of how social media data from sites examined would have protected

class information removed before the relevant social media data is passed on to

evaluators.
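The age-estimation concern raised above is simple arithmetic, which is what makes it hard to scrub away. A hedged sketch, assuming a typical college-graduation age of 22 (the assumption, not the chapter's figure):

```python
# Illustrative only: shows how easily an approximate birth year falls
# out of the education dates left on even a stripped-down profile.
def estimate_birth_year(college_graduation_year, typical_graduation_age=22):
    """Rough estimate; the typical graduation age is an assumption."""
    return college_graduation_year - typical_graduation_age

print(estimate_birth_year(2004))  # 1982
```

Because the inference requires nothing beyond dates that are central to a resume, removing it would mean removing the very employment and education history the evaluator needs.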

Companies may also consider having a third party vendor do the social media

data collection process. One example of a company doing this is Inquirehire (http://inquirehire.com/services/social-media-screening). As noted by Morgan and Davis

(2013), however, third party vendors doing such screening might be considered

“consumer reporting agencies” under the Fair Credit Reporting Act and be subject

to restrictions based on that law and other consumer protection laws. Regardless of

choices made here, procedures used should be well-specified and followed consistently to avoid lawsuits and negative legal judgments.


