University of New South Wales Law Journal

Faculty of Law, UNSW

Burdon, Mark; Harpur, Paul --- "Re-conceptualising Privacy and Discrimination in an Age of Talent Analytics" [2014] UNSWLawJl 26; (2014) 37(2) UNSW Law Journal 679


RE-CONCEPTUALISING PRIVACY AND DISCRIMINATION IN AN AGE OF TALENT ANALYTICS

MARK BURDON[*] AND PAUL HARPUR[**]

I INTRODUCTION

The draft has never been anything but a fucking crapshoot ... We take fifty guys and we celebrate if two of them make it. In what other business is two for fifty a success?[1]

Billy Beane’s quote from Moneyball represents a difficulty that most employers face on a regular basis and continually dread, namely, the recruitment and retention of employees. Employee recruitment and retention have always been contentious and complex decisions for employers. Historically, hiring was based on social processes of human interaction – a prospective employee traditionally submitted a job application and a manager would decide whether or not to call the person in for an interview. The traditional process is by no means perfect, as exemplified in Moneyball. Beane realised that new data-driven methods of improving player selection were required. Recruitment decisions needed to be founded on statistical data rather than the irrelevant and prejudicial instincts of time-honoured experience.[2] The subsequent success of the Oakland A’s is often touted as a justification for the use of ‘big data’[3] in the workplace, or ‘talent analytics’ as it is commonly called.[4] Talent analytics has opened up new employer opportunities which use predictive techniques to improve the accuracy of recruitment and retention decisions.[5] Moneyball encapsulates the start of a journey that is gathering increasing momentum. We are entering an age of predictive recruiting and retention which is challenging and changing the foundations of employee selection, with many potential positive benefits for both employers and employees.

However, we also contend that negative implications can arise through potential forms of discriminatory action that are very different to traditionally constructed forms of discrimination based on certain attributes, such as age, disability, race or sex. Discrimination in the talent analytics era can still be founded on these attributes, but discriminatory decisions can now also be founded on random attributes generated through endless correlations of predictive patterns and segmentations tied to prescriptive actions. For example, the web browser an applicant used to upload their job application[6] or when and where an employee has their lunch[7] are now potentially relevant factors in recruitment and retention decisions.

In order to find a balance between the benefits and the potential negative impacts of talent analytics, we put forward a new conceptual framework: an info-structural perspective, which affords the viewer a different lens through which to consider these new problems and thus moves discussion away from the confines of first-generation anti-discrimination and information privacy laws. Our info-structural perspective highlights the potential dangers of prescriptive segmentation[8] and indicates that discriminatory practices can now be embedded in information infrastructures. Our info-structural perspective is grounded in Sturm’s formative work on structural discrimination.[9] We then suggest that new forms of info-structural due process could ameliorate issues of structural discrimination through the greater integration of information privacy law and anti-discrimination law.

We argue that processes at the heart of talent analytics could ultimately give rise to a new form of workplace discrimination, which we call ‘info-structural discrimination’. Our concept of info-structural discrimination is not about discrimination by uses of information per se. Instead, it is about the potential for discriminatory practices to develop through information infrastructures in which unfairness and discrimination are embedded into the prescriptive processes and infrastructures of talent analytics. The construction of discriminatory exclusions under info-structural discrimination is consequently very different to traditional forms of discrimination and thus new ways of understanding the role of privacy and anti-discrimination law need to be conceptualised and developed.[10]

II TALENT ANALYTICS

Proponents of talent analytics[11] make a number of claims about the benefits that new analytical processes will provide employers. Analytical processes can scrutinise employee data to enhance an organisation’s competitive advantage.[12] Talent analytics can therefore lead to higher employee productivity and assist with the identification and retention of ‘top talent’.[13] It can also assist with employee recruitment as talent analytics makes it possible to identify the key traits of an organisation’s most valuable employees and match those traits to the ongoing and future requirements of the organisation.[14] Talent analytics rationalises the recruitment process by ensuring that hiring decisions are no longer based on the vagaries of human rationality but are instead supported by statistical analysis.[15] Similarly in terms of employee retention, talent analytics provides new opportunities to reduce employee attrition rates and to identify the factors that will assist employers to retain staff.[16] Staff retention is an important issue for most industrial sectors as employers can waste thousands of dollars on training prospective employees only for those employees to drop out of the training before it is complete or within weeks of commencing work.[17]

The advent of talent analytics promises to resolve some of these retention dilemmas by providing employers with a means to better identify the traits of their employees and match those traits to the performance requirements of specific jobs.[18] It therefore becomes a win-win for both employee and employer as the employer can identify employees most suited to certain types of jobs and the employee is matched to a job that best suits their own traits.[19] However, the effectiveness of the analytical task is dependent upon the sophistication of data collection processes, analytical processes and the development of new metrics and models.[20] The old adage of garbage in, garbage out is still pertinent to the world of talent analytics. It is therefore important to consider the core processes behind talent analytics.

The first process is metadata generation. Metadata is data about data and, in the context of workplace analytics, metadata generation refers to the automated data produced by internal information systems about employees’ use of those systems. These metadata ‘breadcrumbs’ provide a treasure trove of information because they essentially compose a log of an employee’s activity throughout the working day.[21] For example, audit trail logs record when an employee used a particular device or database. Keystroke logging software records the keystroke movements on any given organisational keyboard. Swipe cards and card readers provide data on when an individual accessed or departed a secured office door. Telephone records detail the times and locations of phone calls. Even data from photocopiers can be used if configured correctly.[22]
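To make the idea of metadata ‘breadcrumbs’ concrete, the following is a minimal sketch in Python of the kind of event record such systems generate automatically. The schema and field names are our own hypothetical illustration, not drawn from any vendor’s product.

    # Illustrative only: a minimal schema for workplace metadata events.
    # All field names and values are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MetadataEvent:
        employee_id: str      # internal identifier, not the employee's name
        source: str           # e.g. 'swipe_card', 'audit_log', 'phone'
        action: str           # what the system recorded
        timestamp: datetime   # when the event was generated

    # Each routine system interaction yields one automated record.
    events = [
        MetadataEvent("E1042", "swipe_card", "entered_level_3", datetime(2014, 3, 3, 8, 55)),
        MetadataEvent("E1042", "audit_log", "opened_crm_database", datetime(2014, 3, 3, 9, 2)),
        MetadataEvent("E1042", "phone", "outbound_call_4m12s", datetime(2014, 3, 3, 9, 30)),
    ]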

Metadata generation is redundant without subsequent processes to collect, store and integrate generated data for future analysis. These processes of collection and storage are the second data process and are themselves automated and built into existing organisational information systems. Previously, such data collection and storage was generally conducted for the purposes of internal security, but this has now changed through the advent of talent analytics. Organisational information systems originally provided processes that served operational requirements and outputs. Now, such systems also enable organisational memory acquisition through metadata capture. We label these collection strategies ‘by-product collection processes’, in which generated metadata is a by-product of existing informational systems. Targeted collection processes have also recently come to light. These data collection processes use specifically designed sensors, for example the sociometer,[23] to monitor and record employee actions, behaviours and patterns.[24] The sociometer was designed by the MIT Human Dynamics Laboratory[25] in the early 2000s and is a small device, similar to an identification card, worn by employees that consists of a number of different sensors. These sensors provide different data measurements, such as location, sound and motion, and thus it becomes possible to monitor the detailed activities of employees and their day-to-day interactions to provide a fine-grained insight into employee and organisational activity.[26] Quantitative surveys are another popular and common form of targeted data collection.[27]

Before data can be used for analytical purposes, collected metadata sets will generally have to be ‘cleansed’. Data cleansing is to a certain extent the forgotten process of data analytics and its purpose is often overlooked in favour of more exciting analytical processes, particularly predictive analytics. Data cleansing in this regard is nevertheless essential to the overall collection and analytical process because it identifies incompatibility issues within datasets, such as incorrect data formats that could impact upon the validity of results. Furthermore, data cleansing may also involve anonymising personally identifiable data and creating unique identifiers to enable multiple dataset aggregation.[28]
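A minimal sketch of the cleansing step described above, assuming a hypothetical record layout: formats are normalised and direct identifiers are replaced with a consistent pseudonym so that records from different systems can still be aggregated.

    # Sketch only: normalise formats and pseudonymise identifiers.
    import hashlib

    def pseudonymise(employee_id: str, salt: str = "org-secret") -> str:
        # The same input always yields the same token, enabling
        # aggregation across datasets without the raw identifier.
        return hashlib.sha256((salt + employee_id).encode()).hexdigest()[:12]

    def cleanse(record: dict) -> dict:
        return {
            "uid": pseudonymise(record["employee_id"]),
            "source": record["source"].strip().lower(),   # fix inconsistent formats
            "timestamp": record["timestamp"],             # assumed already ISO-8601
        }

    raw = {"employee_id": "E1042", "source": " Swipe_Card ", "timestamp": "2014-03-03T08:55:00"}
    print(cleanse(raw))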

It is also important to note that generally two types of data are collected for analytical purposes. Structured data, such as the metadata derived from internal information system audit trails and specified sensors like the sociometer, comes in largely uniform text or numerical formats that make it easy to use for analytical purposes. Structured data tends to require less cleansing as analytical processes will largely be designed around the availability of such organisational data, though de-identification sub-processes may still be significant.[29] Unstructured data, on the other hand, can come in many different formats and from many different locations that may be external to the employing organisation. For example, unstructured data can include text, photographs, videos[30] and even the details of publicly available employee social media accounts, such as Facebook, which provide further insights into the behavioural preferences of employees.[31] It is the combination of structured and unstructured data that provides the background for maximising predictive capabilities because the combination of different data gives rise to unintuitive insights through unexpected correlations.[32] Meaning is given to these correlations in the form of predicted patterns generated by the final process, predictive analytics.
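A sketch of how the two data types might be combined into a single feature vector per employee, with structured counts sitting alongside crude features extracted from unstructured text. Every feature name here is a hypothetical illustration (the ‘manga’ feature anticipates the Gild example discussed below).

    # Sketch only: merging structured metadata with features derived
    # from unstructured text. Feature names are hypothetical.
    def build_features(structured: dict, free_text: str) -> dict:
        text = free_text.lower()
        return {
            **structured,                            # e.g. badge swipes, call counts
            "mentions_manga": int("manga" in text),  # unstructured-derived signal
            "text_word_count": len(text.split()),
        }

    vec = build_features({"badge_swipes": 41, "calls": 12},
                         "Enjoys open-source projects and manga forums")
    print(vec)
    # {'badge_swipes': 41, 'calls': 12, 'mentions_manga': 1, 'text_word_count': 6}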

Predictive analytics entails the use of algorithms to data-mine cleansed organisational datasets in the search for unintuitive patterns. The search for unintuitive patterns is the ultimate goal of predictive analytics so that new correlative insights can be gained from existing datasets that provide a new understanding of how an organisation is functioning and could be functioning in the future. In essence, predictive analytics moves from descriptions of data, albeit fine-grained and detailed ones, to predictions of outcomes.[33] It is the predictive capacity of the analytical process that garners the most interest in the workplace setting. For example, it was widely reported in 2013 that a US company, Evolv, identified a correlation between the browsers used to upload a job candidate’s application and the future effective performance and retention of candidates post appointment.[34] Those candidates that used a browser downloaded from the Internet, such as Firefox or Chrome, were more likely to be effective employees and to stay longer than those candidates that used pre-installed operating system browsers, such as Internet Explorer or Safari.[35]
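A toy illustration of the kind of correlation hunting described: group candidates by an incidental attribute (browser type) and compare average tenure. The figures are fabricated purely to show the mechanics.

    # Fabricated data: (browser used to apply, tenure in months).
    from collections import defaultdict
    from statistics import mean

    records = [
        ("firefox", 19), ("chrome", 23), ("chrome", 17), ("firefox", 21),
        ("ie", 9), ("safari", 11), ("ie", 14), ("safari", 8),
    ]

    by_group = defaultdict(list)
    for browser, tenure in records:
        downloaded = browser in {"firefox", "chrome"}   # downloaded vs pre-installed
        by_group["downloaded" if downloaded else "pre-installed"].append(tenure)

    for group, tenures in by_group.items():
        print(group, round(mean(tenures), 1))
    # An analyst seeing such a gap might, rightly or wrongly, promote
    # 'browser type' to a recruitment signal - the move this article questions.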

One of the key processes behind predictive analytics is segmentation. It involves the process of grouping together entities, or in this case, employees, based on shared similarities and it allows an organisation to identify and differentiate between different types of segments.[36] Segmentation stems from the discipline of marketing where its use is more readily visible and obvious. Segmentation allows organisations to learn more about the behaviour of certain groups within their overall customer cohort.[37] Once customer behaviours are more clearly identified, the organisation can then tailor and target resources, design choices and advertising strategies towards the behaviours of that specified segment.[38] For instance, purchasers of one product can be identified as suitable customers to receive coupon discounts for a related product.[39] A prior process of predictive analytics is consequently used to segment cohorts based on predicted outcomes. Predictive segmentation also creates another type of segmented scenario where a segment is identified but the identity or meaning of the group is yet to be established. In these situations, a predicted outcome is posited and meaning is subsequently attached.[40]
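A minimal sketch of predictive segmentation: each employee is scored by a stand-in predictive model and then bucketed into segments that can each receive a different treatment. The signals, weights and threshold are all invented for illustration.

    # Sketch only: a stand-in 'model' segments employees by predicted risk.
    def predicted_attrition_risk(emp: dict) -> float:
        # Invented weighted sum; a real system would learn these weights.
        return 0.6 * emp["linkedin_visits_per_week"] / 10 + 0.4 * emp["email_negativity"]

    employees = [
        {"id": "E1", "linkedin_visits_per_week": 9, "email_negativity": 0.7},
        {"id": "E2", "linkedin_visits_per_week": 1, "email_negativity": 0.2},
    ]

    segments = {"flight_risk": [], "stable": []}
    for emp in employees:
        key = "flight_risk" if predicted_attrition_risk(emp) > 0.5 else "stable"
        segments[key].append(emp["id"])

    print(segments)   # {'flight_risk': ['E1'], 'stable': ['E2']}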

The next generation of data analytics, Analytics 3.0,[41] further extends the predictive elements of existing analytical frameworks through the advent of two significant developments: embedded processes and prescription.[42] The 1950s-era Analytics 1.0 company focused on business intelligence and the use of information systems to aid understanding of company operations.[43] Data collection was cumbersome and decision-making as a consequence was painstakingly slow and was further limited in scope by the restricted application of descriptive analytics and its inability to provide behavioural predictions.[44] By the mid-2000s, the Analytics 2.0 icons, Google, Amazon and eBay, radically extended the foundation of business intelligence through the use of ‘business analytics’.[45] Unlike the Analytics 1.0 company that focused on core business data, all data is relevant for the Analytics 2.0 company including new sources of data beyond internal company operations.[46] The vast new collections of data required new forms of data collection and this in turn spurred technological developments that created new devices and processes which could better generate, collect, monitor and analyse data through predictive processes.

Analytics 3.0 now addresses a further phenomenon – the embedded employment of analytical processes in businesses and across industries. The Analytics 3.0 company literally attempts to record, collect and analyse everything about itself and its industrial environment. Every internal organisational facet can be measured and examined in the search for new unintuitive correlations that provide new insight into organisational functions.[47] In essence, the embedded nature of these developments also leads to new forms of logic. The Analytics 3.0 company has to always collect new streams of data to produce new correlated patterns that provide new insight in order to be competitive.[48] Embedding analytics therefore has a circular effect. Employers become more and more dependent on analytical outcomes and it also becomes harder for employer decision-makers to avoid using analytics.[49] In that sense, the Analytics 3.0 company is different to its predecessors because it is infrastructural in nature both in terms of its requirement for industry-wide application and its embedded nature in organisational information infrastructures.

Analytics 3.0 also requires prescriptive analytics, which ‘uses models to specify optimal behaviours and actions.’[50] Unlike predictive analytics, prescriptive analytics provides modelled solutions for predicted outcomes.[51] Prescriptive analytics is consequently the next step forward: it concerns the implementation of predictive projections as prescriptive outcomes that seek to modify human behaviours in ways that maximise operational benefits.

Prescriptive analytics goes beyond predicting future outcomes by also suggesting actions to benefit from the predictions and showing the decision maker the implications of each decision option. Prescriptive analytics not only anticipates what will happen and when it will happen, but also why it will happen.[52]

A prescriptive process can analyse and thus predict the attributes of successful employees by creating a model that identifies errant employee behaviour and puts forward modification solutions in relation to that behaviour.[53] The focus of prescriptive analytics is therefore not solely the probabilistic prospects of predictive analytics. Instead, the focus is on generating possible solutions that give effect to predicted outcomes.

Prescriptive analytics is founded on optimisation, which seeks to achieve the best outcome in the face of the complexity and uncertainty of existing information environments.[54] The prescriptive outcome therefore focuses on the development of the best, data-driven responses that maximise the use of employer resources.[55] This of course has always been a time-honoured problem for organisations of all types as desired outcomes generally entail the development of a number of possible options. Prescriptive analytics is consequently the operationalisation, and potentially the automation, of decision-making predicated on predictive outcomes and probabilised responses.[56] Moreover, the prescriptive models themselves become the means for embedding analytics into key processes and behaviours.[57] The same circular logic consequently exists. Embedded prescriptive processes provide greater insight into key operational processes, such as employee behaviour, and the focus of decision-making thus shifts towards changing the behaviour of employees in order to facilitate more efficient operational processes.
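The optimisation at the heart of the prescriptive step can be sketched very simply: enumerate candidate interventions, estimate the expected payoff of each against a predicted outcome, and surface the best one. Every number and intervention below is invented for illustration.

    # Sketch only: pick the intervention with the best expected payoff.
    interventions = [
        {"name": "do_nothing",        "cost": 0,    "retention_lift": 0.00},
        {"name": "pay_rise",          "cost": 8000, "retention_lift": 0.25},
        {"name": "mentoring_program", "cost": 1500, "retention_lift": 0.10},
    ]

    REPLACEMENT_COST = 20_000   # assumed cost of losing and re-hiring an employee

    def expected_value(option: dict, attrition_risk: float) -> float:
        avoided_loss = option["retention_lift"] * attrition_risk * REPLACEMENT_COST
        return avoided_loss - option["cost"]

    risk = 0.8   # predicted attrition risk from the earlier predictive step
    best = max(interventions, key=lambda o: expected_value(o, risk))
    print(best["name"])   # the 'prescription' surfaced to the decision-maker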

The data collection aims, processes and requirements of the Analytics 3.0 company are different to their predecessors especially when the concepts of talent analytics and Analytics 3.0 are combined. This new form of talent analytics, Talent Analytics 3.0, heralds an era in which an ever-increasing cycle of more data will be collected about individual employees as prescriptive analytical processes become more and more embedded in operational processes, organisational cultures and employee behaviours. At the same time, employer decision-making, particularly in relation to recruitment and retention, will be founded on an ever-increasing prescriptive focus that seeks to modify, rectify and ameliorate employee behaviours to enhance organisational effectiveness as exemplified in the next part of this article.

A Talent Analytics in Action

As highlighted above, talent analytics promises to be a fertile ground regarding issues of workplace recruitment and retention. Some employers are starting to actualise the much vaunted rhetoric of ‘big data’ and are finding new insights about their employees and their recruitment practices. Relatively new start-up firms, such as Evolv,[58] Visier,[59] Gild[60] and Knack,[61] are being joined by consultancy titans, such as Deloitte,[62] Accenture[63] and IBM,[64] in establishing a new and rapidly expanding market for talent analytics. So much so that the seismic effect of ‘data-driven’ recruitment and retention is now becoming visible. In so doing, the potential discriminatory effects that can flow from analytical decision-making are also becoming more appreciable.

According to Evolv, humans are ‘pretty bad at evaluating other human beings’[65] and thus selecting new employees and monitoring the performance of existing employees remains a significant challenge for employers. New analytical processes aim to assist employers with these difficult tasks by providing new insights that are debunking many common recruiting assumptions.[66] Widely accepted, pre-analytical predictors of ‘good’ employees, namely attendance at the ‘right’ university, good GPA results and the quality of references, are now turning out to be less relevant factors despite the fact they have founded historical recruitment decision-making for several generations.[67] The core assumptions of pre-analytical recruitment thinking are now being exposed as biased, irrelevant and ineffective.[68] Assumptions once taken as given, such as the belief that job hopping or periods of unemployment equate to unreliable employees, no longer hold.[69] The same can also be said for prospective employees with criminal records. Analytical processes are now showing that, in customer call centre positions, people with a criminal background actually perform better than those without.[70]

Taken-for-granted recruitment assumptions are thus being cauterised and replaced with new unintuitive insights founded on predictive analytics.[71] Consequently, the browser type a candidate uses to upload their application matters as much as the application itself.[72] A candidate who is creative but not overly inquisitive and is a member of one but no more than four social networks is more likely to be hired as a customer-care representative by Xerox, especially if they live close to the office and have access to reliable transport.[73] Software programmers who have never written open-source code software can be recruited for open-source code software programming positions if they have the right online profile and an interest in Japanese manga websites.[74] Even a certain combination of words in a tweet can now become a reliable indicator for a good software programming candidate.[75]

The same developments are also taking place in relation to employee retention. Those employees who socialise more frequently at the water cooler or in the office kitchen are not the slackers they were once thought to be. Instead, these are the employees who are more likely to contribute to a positive work culture and stay with an employer because they identify their employment environment as a place that provides cohesive community support, a key factor in employee retention in some industrial sectors.[76] It is also now possible to do the impossible: to predict whether high-performing staff are thinking of leaving before they actually leave. Language patterns in email messages, as opposed to the content of such messages, now provide reliable indicators of unhappy star performers.[77] So do increasing numbers of visits to LinkedIn or Facebook.[78] By applying predictive analytics to an ever-increasing range of available data it becomes possible to paint a profile of those staff that are more likely to leave. In Microsoft’s case, for specified technical roles, that profile included employees who were hired direct from university three or more years previously and had been promoted once.[79]

Similar advances are now taking place in Australia. Kronos, a US ‘global workforce management solutions’[80] firm, has an Australian subsidiary that specialises in most aspects of workplace resources management, from tracking employee ‘clocking in’ times to monitoring workflow processes.[81] This allows employers to track a number of different employee-dependent activities, such as monitoring absenteeism by individual employees and organisational units,[82] examining the effectiveness of manufacturing processes in real time[83] and providing on-demand, mobile access to analytics dashboards.[84] The Australian Human Resources Institute now runs training courses nationally on workforce analytics that focus on the use of workplace data ‘to provide insights into how the HR strategies we put in place drive business execution.’[85] Visier, one of the US innovators of talent analytics, already has an Australian partner, Navigo, a human resources technology vendor that specialises in workforce analytics.[86] Analytics departments have even permeated the university sector. The Australian National University has a workforce planning and analytics branch[87] and La Trobe University recently advertised a position for a workforce analytics advisor.[88]

Talent analytics now goes beyond prediction and incorporates the process of prescriptive actions. In terms of recruitment, human resources departments now actively seek to recruit those highly-prized employees in other companies who are looking at LinkedIn a bit too often.[89] HR officers now consider in greater depth those candidates that dropped out of university but have proven technical capacities.[90] As regards retention, employers are now starting to ignore instincts and are installing more amenities for employee socialisation as these amenities provide the environments of support and real day-to-day organisational decision-making.[91] Even the importance of an employee smile and its effect on customer satisfaction is now being measured and factored into employer decision-making about future promotions.[92]

It has to be acknowledged that these developments appear to be providing employers with significant economic benefits. As with other considerations of big data, there are many stories of success driven by predictions of the unintuitive. By maximising driver performance with route structures, UPS reduced its worldwide daily delivery schedule by 85 million miles. The cost savings were considerable, as UPS estimated that saving one daily mile driven by each driver saves the company US$30 million overall.[93] Evolv assisted a leading customer experience provider to predict which candidates would stay longer and perform better, which resulted in reduced attrition rates, improved customer care performance and saved the company US$5.5 million in 2012.[94] One of the outcomes of these successes is renewed investment in technological innovations that make it easier for HR managers to identify previously unknown employee risks for employers. For example, Visier provides a traffic light dashboard embedded into existing HR information systems that provides real-time predictions of employee satisfaction derived through performance and transactional data.[95] If the traffic light turns red, then HR action may be required.
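A back-of-envelope check shows how a figure like ‘one daily mile per driver saves US$30 million’ can arise; the fleet size and per-mile cost below are our assumptions, not figures from the article or from UPS.

    # Assumed inputs; only the headline $30M figure comes from the article.
    drivers = 55_000          # assumed number of drivers
    cost_per_mile = 2.18      # assumed all-in cost per mile, in US dollars
    working_days = 250        # assumed delivery days per year

    annual_saving = drivers * 1 * cost_per_mile * working_days   # one mile saved per day
    print(f"${annual_saving:,.0f}")   # ~$30 million, matching the order of magnitude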

However, the predictive and prescriptive focus of talent analytics may also have some significant consequences for workplace activities involving employees, as exemplified by this quotation:

The use of prescriptive analytics often requires changes in the way frontline workers are managed. Companies will gain unprecedented visibility into the activities of truck drivers, airline pilots, warehouse workers, and any other employees wearing or carrying sensors (perhaps this means all employees, if smartphone sensors are included). Workers will undoubtedly be sensitive to this monitoring. Just as analytics that are intensely revealing of customer behaviour have a certain ‘creepiness’ factor, overly detailed reports of employee activity can cause discomfort.[96]

In the next part, we outline our concerns in relation to talent analytics through the hypothetical ‘prescripted employee’, an employee whose behaviour is predicted and becomes the focus for prescriptive outcomes.

III THE PRESCRIPTED EMPLOYEE

We now attempt to identify the characteristics of the ‘prescripted employee’ to demonstrate how the embedded and prescriptive processes of talent analytics can use the informational attributes of individual employees to identify and modify employee behaviours. We contend that such actions may give rise to discriminatory practices. However, we further contend that the potential exclusionary impact of talent analytics in relation to recruitment and retention decisions will not necessarily be classified as a discriminatory practice under anti-discrimination law.

So what does this new work environment mean for the prescripted employee and what does the prescripted employee look like? An insight is provided in this extract from a prescriptive case study:

Safety is a core value at Schneider. Driving sensors are triggering safety discussions between drivers and their leaders. Hard braking in a truck, for example, is captured by sensors and relayed to headquarters. This data is tracked in dashboard-based safety metrics and initiates a review between the driver and his/her leader. Schneider is piloting a process where the sensor data, along with other factors, goes into a model that predicts which drivers may be at greater risk of a safety incident. The use of predictive analytics produces a score that initiates a pre-emptive conversation with the driver and leads to less safety-related incidents.[97]

The Schneider example concerns the use of analytics for workplace safety. However, the sensors employed, the data generated and the analytical frameworks adopted can equally be used for human resource decision-making. Instead of trying to predict which drivers are most at risk, the same sensors, data and analytics can also be used to predict those learner drivers in Schneider’s truck driving school programs that should be recruited, and those that should not.[98] Schneider now has the capabilities to match potential recruits to the desired attributes of the model ‘Schneider driver’, namely, a driver that meets the required safety metrics and meets performance expectations. Predictive segmentation consequently takes place around a new series of recruitment and retention factors, such as hard braking, which make other traditional factors, such as previous employment and educational qualifications, less relevant. Once recruits are segmented into groups that favour the desired safety metrics, and those that do not, the prescriptive outcome can then take place. In the case of workplace safety, the ‘pre-emptive conversation’ is the prescripted outcome which provides a warning to the driver to modify behaviour and thus meet safety metrics. In the recruitment scenario, the ‘pre-emptive conversation’ involves confirmation or rejection of potential recruits.
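A sketch of the Schneider-style pipeline as described: sensor readings feed a simple risk score, and crossing a threshold triggers the ‘pre-emptive conversation’. The signals, weights and threshold are invented; a real system would learn them from historical incident data.

    # Sketch only: sensor-derived score gating a driver review.
    def safety_risk(hard_brakes_per_100mi: float, night_miles_share: float) -> float:
        # Invented weighting of two illustrative signals, scaled to 0..1.
        return 0.7 * min(hard_brakes_per_100mi / 10, 1.0) + 0.3 * night_miles_share

    def review_needed(score: float, threshold: float = 0.5) -> bool:
        return score >= threshold

    score = safety_risk(hard_brakes_per_100mi=8, night_miles_share=0.4)
    print(round(score, 2), review_needed(score))   # 0.68 True -> schedule conversation

The same scoring-and-threshold mechanics, pointed at recruitment data rather than safety data, yields the accept/reject segmentation the paragraph describes.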

The prescripted employee therefore is an employee whose workplace behavioural patterns are increasingly being predicted. Future behaviour is predicted from an employee’s own past actions and correlated against ranges of diverse data derived from unidentifiable populations of comparable employees. These predictions are thus founded on informational attributes that are increasingly random and unintuitive to both the employee and the employer. These informational attributes are to be found in an ever-increasing stock of very varied data from internal and external sources.[99] Predicted patterns are then generated from which prescriptive outcomes arise. These outcomes of prescription seek to change the behaviour of existing employees or shape the behaviours of prospective employees to maximise existing and future employer operational benefits.

Prescriptive outcomes are not yet commonplace in the workplace but the Schneider example highlights how the prescriptive focus of talent analytics could be central to future workplace recruitment and retention developments. These prescriptive outcomes will inform future employer decision-making and will be based on a range of different types of data that may or may not be connectable or identifiable by the employee or even the employer. We therefore contend that potential discriminatory practices in relation to the prescripted employee are consequently very different to those covered by traditional anti-discrimination law, which are founded on certain protected attributes.

A The Protected Attributes of Anti-Discrimination Law

Anti-discrimination law defines a range of situations where discriminating on the basis of certain attributes is prohibited. These attributes are social constructs with underlying real facts.[100] Gender or physical ability are biological facts. How members of society construct these facts creates notions of sexism, racism and ableism. Lawmakers have determined that defined forms of discrimination should be reduced in certain relationships in society. Accordingly, anti-discrimination laws define the attributes and the circumstances where it is unlawful to discriminate against an individual based upon those attributes.[101]

Determining what forms of discrimination should be acceptable and prohibited represents a continuing social dialogue. Today, it is generally accepted that employers may be able to discriminate against people on the basis of their grooming, qualifications and capacity to efficiently perform occupational requirements. Some commentators have questioned whether employers should be able to discriminate based upon time spent using computers,[102] good looks or weight.[103] Then there are grounds, such as race or sex, that it is generally accepted should not form part of employers’ decision-making processes except in tightly regulated, exceptional circumstances. In deciding whether a ground for discrimination is acceptable or unacceptable, anti-discrimination law draws on a long antecedence of political and social conflict.

Prior to the emergence of anti-discrimination laws there was violent social discourse over whether discrimination based upon race or gender was acceptable or immoral.[104] The civil rights struggles in the United States led to the first anti-discrimination laws in the Civil Rights Act of 1964,[105] which provided limited protection to racial, ethnic, national and religious minorities, and women. The limited range of attributes protected by the Civil Rights Act was extremely influential when Australia was drafting its anti-discrimination regimes.[106] The Commonwealth Parliament first protected against race discrimination in 1975 and sex discrimination in 1984.[107] Attribute protection for disability was not enacted until 1992, and protection against age discrimination not until 2004.[108]

While the decision to protect certain attributes follows adjustments in society’s moral compass,[109] other changes have been motivated by informational and scientific advances. In addition to the first-generation of animus-based discrimination, that features discriminatory actions predicated on physical and social attributes,[110] statistical discrimination occurs ‘when an individual treats members of a group differently because he believes group membership correlates with some attribute that is both relevant and more difficult to observe than group membership.’[111]

Statistical discrimination is therefore discrimination by irrational correlation of information in which the discriminator bases a decision on a certain informational quality linked to the social or physical attribute of a given group.[112] For instance, to use Strahilevitz’s employee hiring example, an employer does not want to employ a job candidate with a criminal background as he or she believes that is a good indicator of future criminal behaviour. However, the employer does not have access to the criminal records of candidates and thus makes a decision based on the understanding that individuals from certain racial groups are more likely to be involved in the criminal justice system.[113] The effect of this decision is to exclude certain individuals from employment prospects on the basis that their racial characteristics might correlate with some relevant informational quality.[114]

A further extension of statistical discrimination relates to the capacity to use Deoxyribonucleic Acid (‘DNA’) and genetic testing to discriminate against individuals.[115] While mapping the human genome has a range of positive medical applications, it has also been used to construct models of some people as genetically normal and of others as defective or corrupted.[116] The close alignment between eugenics, genetic discrimination and empirical evidence resulted in considerable calls for genetic discrimination to be prohibited.[117] Certain jurisdictions enacted legislation which extended protection against genetic discrimination.[118] In Australia, genetic discrimination is now prohibited following the Disability Discrimination and Other Human Rights Legislation Amendment Act 2009 (Cth), which broadened the definition of ‘disability’ to include discrimination based upon ‘genetic predisposition’.[119]

Arguably there are strong parallels between discrimination caused by genetics and that caused by talent analytics, as both do not focus on factors which can be observed in normal social interactions. Rather than focusing on observable physical characteristics, both of these potential forms of discriminatory action concern the use of information that diverges from an ideal construct. While there are similarities between these forms of discrimination, there are significant differences in how their underlying processes develop.

Most anti-discrimination statutes rely on so-called negative duties.[120] These laws focus on the individual act of discrimination and primarily rely on persons who have suffered discrimination to enforce their own rights.[121] Such laws are retrospective, in the sense that the person who is discriminated against carries the onus of proof and the alleged discriminator has few obligations to proactively remove barriers to equality. The requirement for a duty holder to make reasonable adjustments and to avoid indirect discrimination does create some limited positive duties, but overall the traditional approach to anti-discrimination laws has been individualised and retrospective, and relies upon passive enforcement.[122]

Indirect discrimination provisions focus upon prima facie neutral policies and practices that have a disparate impact upon people with an attribute. Indirect discrimination occurs where an alleged discriminator either requires, or proposes to require, the person who claims to have been discriminated against to comply with a requirement or condition, or fails to make a reasonable adjustment to enable them to comply.[123] The complainant must then prove that, due to the complainant’s attribute, they would only be able to comply with this requirement or condition if the discriminator made reasonable adjustments for the complainant, and that the alleged discriminator does not make such adjustments. Finally, the complainant must prove that the failure to make the reasonable adjustments has, or is likely to have, the effect of disadvantaging persons with their attribute.

We show in the next part that the predictive segmentation processes and prescriptive embedded nature of talent analytics may have potential discriminatory impacts that diverge from the traditional protected attributes and prohibited discriminatory practices of anti-discrimination law.

B The Prescripted Employee and the Issue with Protected Attributes

We contend that the potential discriminatory practices that could flow from talent analytics, in the form of the prescripted employee, are not automatically covered by anti-discrimination laws because they do not habitually involve decisions regarding a protected attribute. However, these decisions have the same exclusory impact, as an individual or a group of individuals has a prescriptive outcome employed against them based on a prediction of their likely behaviour.[124] We therefore contend that these predictive segmentation practices and prescriptive outcomes produce effects that are beyond the scope of traditional anti-discrimination law. Exclusions are being made, but they are being made on a whole range of random factors.[125] Recruitment exclusions are no longer founded on informational attributes related to the protected attributes traditionally covered by anti-discrimination law, for example, gender, race, ethnicity and sexual preference. Instead, a whole range of different informational attributes, such as performance data, location data, device data, transaction data and sociometric measurements, now form the basis for exclusionary decision-making, and these attributes are not related to the traditionally protected attributes of anti-discrimination law.[126] We contend that some predictive segmentation practices and uses of prescriptive outcomes to modify employee behaviour would be likely to be covered by anti-discrimination law, whilst others would not.

We argue that it is almost impossible to use existing anti-discrimination attributes to impugn analytical processes in the workplace. Even where discriminatory practices derived through talent analytics involve discrimination on the basis of protected attributes, establishing a link between the protected attribute and the discriminatory practice is likely to be evidentially insurmountable. While a prohibited attribute might be a factor in the predictive or prescriptive analytical process, as we will explain below, proving the existence of a discriminatory factor and how this factor was considered is almost impossible in analytical decision-making processes.

In this regard, the human decision-maker remains a key aspect in how anti-discrimination laws construct discrimination. Direct discrimination requires an applicant to establish that a decision-maker considered a protected attribute in making their decision.[127] Indirect discrimination requires the applicant to establish that their attribute bears upon their ability to comply with a requirement or policy.[128] However, it is not always possible to identify a discriminatory act that can be impugned in court. This does not mean it is not possible to identify the human decisions that have resulted in the discriminatory outcome.

Moreover, where traditional models of discrimination depend on a human element, talent analytics often does not. The processes of metadata generation and data collection are largely automated. Once the data is collected, analytical tools seek out connections between desired outcomes and the dataset. While the data scientist can monitor the operation of the software, the software will largely learn from existing datasets what pieces of information are positively and negatively connected to certain outcomes.[129] Monitoring the content of information used by the analytical tools is extremely difficult. To recommend a course of action, prescriptive analytics may need to make millions of computations based upon the different outcomes of descriptive and predictive analytical processes. As a consequence, there is a limit to what a human operative can do to control the analytical process, and identifying where an exclusionary decision is actually made becomes problematic.

We contend that it would be almost impossible for an applicant to establish that they have suffered direct discrimination because of the operation of prescriptive analytic processes. Direct discrimination requires a complainant to establish that the alleged discriminator treated them less favourably because the complainant has a protected attribute.[130] The analytical processes of talent analytics usually involve millions of pieces of data.[131] All anti-discrimination statutes anticipate decisions where a protected attribute is just one of many reasons. The federal anti-discrimination statutes provide that if there are two or more reasons for discriminating, then unlawful discrimination will exist if one of those reasons is a prohibited attribute.[132] Accordingly, anti-discrimination laws will be enlivened even if the protected reason is not a dominant or substantial reason. Adverse action under Part 3-1 of the Fair Work Act 2009 (Cth) and anti-discrimination laws adopt different tests. Under the Fair Work Act an employer must establish that the protected attribute was not a ‘substantial and operative’ reason for the adverse conduct.[133] Even though these laws recognise that discrimination can exist where there are multiple causes, establishing that protected attributes were any factor will be difficult. Even if a complainant could access the millions of pieces of data and the relevant algorithms used to generate the predictive and prescriptive outcomes, absent a smoking gun, it would be difficult to reverse engineer the processes to determine a sufficient link between an attribute and a conscious act on behalf of the employer.[134]

It also has to be acknowledged that complainants have reportedly encountered substantial difficulties in satisfying the elements of indirect discrimination in court.[135] We suggest that it would be extremely difficult to prove that an employer indirectly discriminated by using talent analytics. A complaint of indirect discrimination can only succeed where a requirement can be impugned. The mere fact that employment practices significantly disadvantaged female teachers, and that there was a 20 per cent gender gap across New South Wales, did not enable complainants in State of New South Wales v Amery[136] to draw a link between their employer’s requirements and their unfavourable treatment. The majority of the High Court reached this position through adopting a highly technical approach to identifying where a requirement is discriminatory.[137] We suggest a technical reading of indirect discrimination provisions will present difficulties to complainants seeking redress from discrimination flowing from data analytics.

While it may be possible to establish that scoring well on the analytical tool could constitute a requirement, complainants may find it difficult to establish that they experience disadvantage due to the application of a random informational attribute. It is also possible that employers will not have access to the algorithms used in the analytic processes and, even if they did, they would not understand them. As highlighted above, many employers retain consultancy firms to rate the current and predict the future performances of employees. The analyst will gain access to millions of bits of data about employees, customers and social media, and compare this to historic data. This data will be analysed by algorithms and the employer will be provided with predictions and recommended measures to improve outcomes. If an employee contends that this process was discriminatory, then that individual must impugn the facially neutral process. If there was a history of this process adversely impacting upon people with a certain attribute, then this might provide an indication that there was discrimination.[138] Absent such data, which is likely to be the case given the unintuitive nature of the predictive qualities of talent analytics, a key step in impugning the process is understanding how it operated.

Predictive algorithms have enabled companies, such as Google, to launch services that are worth billions of dollars.[139] Data analytics companies will therefore strongly resist providing access to algorithms associated with their descriptive, predictive and prescriptive analytics.[140] Even if an employee did obtain access to the algorithms, the innumerable pieces of internal and external data used in the analytical process will arguably mean that interpreting this process and explaining it coherently to a court represents a significant obstacle for complainants. Essentially, once information is collected and analysed, it is extremely difficult to draw a link between a prohibited attribute and unfavourable outcomes, which may or may not be discriminatory.

We contend therefore that a new approach to these potential problems needs to be considered through the introduction of an info-structural perspective which raises some fundamental questions about the nature of discrimination and the role of information privacy law in providing protections against discriminatory practices.

IV THE INFO-STRUCTURAL PERSPECTIVE AND POTENTIAL DISCRIMINATORY PRACTICES

Part III highlighted that the analytical processes of talent analytics adopt some of the same structural facets as traditional forms of discrimination. A group is segmented. Action is targeted towards that group and that action can have an exclusory effect for individuals in that group or for the group as a whole. However, because the segmentation of prescripted employees is no longer based on the protected attributes of anti-discrimination law, legal protections against exclusory acts may not be available. This gives rise to a fundamental question. If exclusions based on informational attributes that are not protected, for example, the type of browser a candidate used, how an employee uses the office kitchen or whether a candidate likes a type of Japanese cartoon, do not amount to recognised forms of discrimination, then what, if anything, constitutes discrimination in the world of the prescripted employee?

We put forward a perspective in relation to that question that is derived from the structural and embedded nature of predictive and prescriptive analytics. Drawing on Sturm’s seminal work on structural discrimination,[141] we contend that a new conceptual framework is required to identify potential discriminatory practices through an info-structural perspective.

Sturm argued that first-generation forms of discrimination were caused through deliberate exclusions or subordinations of individuals or groups of individuals based on identifiable social and physical attributes.[142] Discriminatory practices were clearly identifiable because (a) the attribute was readily identifiable and (b) the application of the attribute could be clearly identified as an irrelevant factor for a given employment position. In that sense, first-generation discrimination refers to unequal treatment in relation to protected attributes ‘that violated clear and uncontroversial norms of fairness and formal equality.’[143] First-generation discrimination therefore involved clear and vivid moral imagery in which a general consensus could be reached about what constituted a discriminatory practice,[144] for example, the ‘Irish Need Not Apply’ sign on the employer’s front door.

What then of potential predicted and prescripted discriminations? What then of the sign on an employer’s website that states ‘Internet Explorer or Safari Users Need Not Apply’? This question does not entail the clear and vivid moral imagery of first-generation discriminations. Nor does it clearly conform to well understood ideas of discrimination in which the exclusionary effects were stark and the decisions related to those effects easily identifiable.[145] The reason for this of course, as highlighted in the previous part, is that the processes of predictive segmentation and prescriptive outcome are not founded on the deliberate exclusion of protected attributes such as gender or race. Moreover, the information attribute in question, the type of browser used to upload a job application, may or may not be relevant to the capacity to do the job applied for.

It was relevant when Evolv ran its predictive algorithms at a given point in time, on a given set of data, but it could be less relevant if applied at a different point in time with the accumulation of different data. All of which makes it difficult to determine whether any attribute in relation to that job, or indeed most jobs, was by nature arbitrary, denigrating and unfair.[146] Decisions based on informational attributes that are outside the protected attributes of anti-discrimination law present the opposite appearance to first-generation discriminations. They are surprising but not arbitrary because they are based on the unintuitive logic of prediction. They are not denigrating: most of us would not take any offence at being derided for our browser choice. Moreover, whether certain people need not apply is considered algorithmically in a myriad of different ways that are never truly intuitive or definitive. After the predictive process is complete, some correlations might look unfair or curious, such as the decisions around browsers, and others would be discriminatory when the linkages are all made. The key point to note here is that no one knows what the predictive outcome will be and whether that prediction will be valid the next day or the one after that.[147] As such, an employee or potential employee will not be able to obtain sufficient access to determine how predictions are made and what data correlations were made to produce that prediction.

The potential discriminatory aspects of talent analytics do not match the concerns emanating from first-generation discrimination. Instead, we contend that discriminatory issues are akin to Sturm’s second-generation discrimination, structural discrimination. Sturm asserts that second-generation forms of workplace bias are structural, relational and situational.[148] These forms of bias are embedded into technological structures and decision-making interactions that produce exclusions which are frequent and organisationally necessary,[149] the result of which is discriminatory practices formed on patterns of interaction among groups within the workplace that eventually exclude non-dominant groups.[150] Moreover, the embedded nature of these interactions is difficult to trace to initiating actors, and embedded discriminatory practices are therefore not automatically perceived by those discriminated against as discrimination. As a consequence, patterns of harassment, actions of exclusion and subordinate practices that freeze out social interaction become the organisational norm.

Discriminatory practice therefore becomes very difficult to identify as exclusions are no longer based on intentional efforts and are instead based on ongoing patterns of interaction shaped by organisational culture.[151] These interactions shape employee conditions, access and opportunities and the conditions shape processes of inclusion and exclusion. The absence of systematic reflection has the effect of entrenching discriminatory interactions and thus organisational norms become so mired that forms of discrimination become difficult to unwind and separate from organisational cultures.[152] The boundaries of illegitimate and legitimate behaviour become difficult to identify because the complexities arising from embedded discriminations do not readily correspond to traditional definitions and the application of remedies through accepted legal recourses.

Does the ‘Internet Explorer or Safari Users Need Not Apply’ sign have a different complexion under a structural analysis? We contend that the application of a structural approach to talent analytics gives rise to a new perspective on the potential discriminatory issues that can arise: an info-structural perspective. This new perspective brings together three interlinked and currently separate elements: (1) technological infrastructures; (2) organisational information systems founded on analytical processes; and (3) embedded discriminatory practices. When these three elements are put together to create an info-structural perspective, it allows the viewer to consider the potential discriminatory practices that could arise from the application of predictive segmentation processes and the intended behaviour modifications of prescripted outcomes.

Sturm argued that second-generation wrongs cannot be reduced to a universal or single theory of discrimination. Instead, normative theorisations of discrimination are plural, subtle and complex.[153] In essence, they challenge the clearly defined notions of discrimination based on protected attributes. As such, structural discrimination functionally defines discrimination to include ‘differences in treatment based on group membership, whether consciously motivated or not, that produce unequal outcomes.’[154]

This broader definition of discrimination encapsulates the discriminatory aspects of predictive segmentation processes which no longer correspond to a group defined by a protected attribute. The focus of prescriptive harm is differences of treatment attached to the membership of certain groups and those treatments produce unequal outcomes. The issue of browser use to determine candidate applicability now takes on a different perspective because the segmentation processes adopted to identify different cohorts now create different groups. Moreover, these groups can be treated in significantly different ways. A Firefox user is now a more likely candidate for a certain type of job than an Internet Explorer user. A candidate who is a known member of four social networks is less likely to be a good candidate for another job. An employee who socialises in the office kitchen is more likely to be a good performer.

But are these grouping treatments creating unequal outcomes? We contend that predictive segmentation processes have the capacity to create inequalities because the outcomes of segmentation are derived from the search for the unintuitive.[155] Individuals in segmented groups are almost never likely to know the reasons why they have been assigned to certain groups and why certain treatments have been applied to them.[156] How could a job candidate possibly know that the browser they used to upload their application has impacted upon their success or failure? It is simply not intuitive. Thus, when employers take steps to segment candidates into groups based on unintuitive attributes, such as browser use, that treatment starts to produce inequalities. It is unfair to individuals that they should be less likely to be employed solely because they belong to a segmented group founded on an unintuitive attribute and a predicted outcome.[157] Inequality therefore derives from the use of informational attributes whose relevance is incomprehensible to those affected, attributes that have the same arbitrary capacities as the deliberate exclusions of first-generation discrimination.[158] The unknown unknowns are not just unknown. They also have the potential to be unfair, and potentially discriminatory where protected attributes are involved.[159]

Even where a candidate becomes aware that browser use may impact on their employment success, the cyclical and never-ending nature of the predictive process still operates against them. The desirable browser of one day will not necessarily remain the same predictor as new data is analysed and new predictions are developed.[160] Consequently, knowledge of the value of segmented prediction affects all of us. Prospective candidates realise that their browser use is an important factor in the job they are applying for, so they adjust their behaviour. Browser use for uploading applications shifts, and with it the metadata being generated, the data being collected and analysed, and thus the prediction, to the extent that browser use is no longer the reliable predictor it used to be. Thus everyone embarks on the fruitless search for the unintuitive attribute that is going to give them the advantage, and in doing so we all create this endless cycle.[161] The info-structural perspective is therefore infrastructural in nature.[162] The perspective highlights the increasingly interconnected nature of individual human existence because data about all candidates is necessary to truly fulfil the aims of talent analytics.[163]
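This feedback cycle can be illustrated with a toy simulation, again our own simplified construction rather than any vendor’s method. All of the probabilities below are invented assumptions. Once candidates learn which browser is ‘desirable’ and adopt it regardless of their underlying suitability, the measured gap between browser cohorts shrinks and the attribute loses its predictive value:

    # A toy simulation of the feedback cycle; every probability is an
    # invented assumption used purely to illustrate the dynamic.
    import random

    random.seed(1)

    def observed_gap(adoption_rate):
        """Measured retention gap between 'Firefox' applicants and the
        rest, once candidates adopt Firefox at the given rate."""
        firefox, other = [], []
        for _ in range(10_000):
            good = random.random() < 0.5  # underlying suitability
            # initially, suitable candidates happen to prefer Firefox
            uses_firefox = random.random() < (0.7 if good else 0.3)
            # as knowledge of the predictor spreads, everyone adopts it
            if random.random() < adoption_rate:
                uses_firefox = True
            (firefox if uses_firefox else other).append(good)
        return sum(firefox) / len(firefox) - sum(other) / len(other)

    for rate in (0.0, 0.5, 0.9):
        print(rate, round(observed_gap(rate), 3))  # the gap shrinks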

The infrastructural element is also apparent within organisations. As highlighted above, Sturm warns about the dangers entailed in the absence of systematic reflection, which diminishes the ability of organisations to root out embedded discriminatory practices.[164] We contend that the absence of organisational critical reflexivity is a key point here, namely, that non-critical acceptance of prescriptive outcomes is going to cement the dangers of predictive segmentation.[165] The historical development of analytics points heavily towards an ‘acceptance creep’. We contend that acceptance creep operates in much the same way as the more commonly recognised ‘function creep’. It is representative of a slippery slope from critical questioning of analytical results to unquestioning belief that results are correct.[166]

In this sense, it is important to acknowledge a key element of our argument. We are not arguing that the technological developments behind talent analytics are themselves the problem. Rather, it is the uncritical and, if unchecked, almost faith-based reliance on the veracity of predictive results which potentially provides the foundation for discriminatory practices. Take, for example, the history of data analytics from descriptive to prescriptive outcomes. With descriptive analytics, an employer would have obtained a detailed report of historical conduct. Predictive analytics took this further and provided employers with a prediction of how employees might act in the future. Prescriptive analytics reduces the human element even further and advises the employer how they should act or direct their employees to act. The automated logic of these analytical processes thus creates the risk that existing prejudices are further enshrined by analytical tools that have the appearance of impartiality.[167]
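The progression can be stated schematically. The following sketch, which uses invented employee records and hand-set thresholds rather than any real analytical model, contrasts the three stages:

    # A schematic contrast of the three stages. The employee records
    # and the hand-set thresholds are invented for illustration.
    employees = [
        {"name": "A", "absences": 2, "tenure_years": 4},
        {"name": "B", "absences": 9, "tenure_years": 1},
    ]

    def descriptive(staff):
        """Descriptive: report what has already happened."""
        return {e["name"]: e["absences"] for e in staff}

    def predictive(staff):
        """Predictive: estimate a future outcome (here, attrition risk)
        from historical attributes via a crude hand-set rule."""
        return {e["name"]: e["absences"] > 5 and e["tenure_years"] < 2
                for e in staff}

    def prescriptive(staff):
        """Prescriptive: convert the prediction into a recommended
        action, removing a further layer of human judgment."""
        return {name: "offer retention bonus" if at_risk else "no action"
                for name, at_risk in predictive(staff).items()}

    print(descriptive(employees))   # {'A': 2, 'B': 9}
    print(predictive(employees))    # {'A': False, 'B': True}
    print(prescriptive(employees))  # B: 'offer retention bonus'

Each stage hands a further slice of the decision to the tool: the last function does not merely report or forecast, it tells the employer what to do.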

This acceptance creep also has to be considered against the logical base of analytical processes. The gathering of information for talent analytics is by its nature heavily influenced by the availability of data, market pressures and other variables.[168] The testing of correlations is done within pressured and confined environments. Unlike genetic standards and tests, which are subjected to considerable scientific and public scrutiny[169] and where the links between genetic strings and certain medical conditions are retested in rigorous peer-reviewed processes, the connections between predictive processes and prescriptive outcomes are not necessarily exposed to such robust processes of testing and verification.[170] Furthermore, unlike scientific outcomes where there are accepted standards for research, industry standards for reliability and validity are more fluid, particularly given the speed with which ‘big data analytics’ has become the norm.[171]
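The fragility of untested correlations can be demonstrated directly. In the following sketch, every attribute is pure random noise, yet because many candidate attributes are screened, at least one will appear strongly ‘predictive’ of the outcome by chance alone. The sample sizes are invented for illustration:

    # Screening many random attributes against a random outcome: the
    # 'best' attribute looks predictive despite being pure noise.
    import random

    random.seed(7)
    n_people, n_attributes = 50, 200
    outcome = [random.random() < 0.5 for _ in range(n_people)]

    best_gap = 0.0
    for _ in range(n_attributes):
        attr = [random.random() < 0.5 for _ in range(n_people)]
        with_attr = [o for o, a in zip(outcome, attr) if a]
        without = [o for o, a in zip(outcome, attr) if not a]
        if with_attr and without:
            gap = abs(sum(with_attr) / len(with_attr)
                      - sum(without) / len(without))
            best_gap = max(best_gap, gap)

    # Without retesting on fresh data, this chance finding could be
    # deployed as a 'predictor' of the outcome.
    print(round(best_gap, 2))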

The ethical nature of talent analytics, and indeed of all forms of business analytics, is therefore important. Analytical processes tend to be performed for a particular client by a for-profit entity; the analytical process could follow quasi-scientific developmental processes or could be developed and applied in a far less rigorous manner than scientifically tested research. Even billion-dollar corporations make substantial errors with analytics.[172] The construction of segmented groupings, the potential for flawed analytical tools and the uncertainty of opaque processes of gathering information arguably render discrimination flowing from talent analytics especially problematic because the processes are so embedded into organisational infrastructures, cultures and employee behaviours.[173] All of this points to a new form of protection that moves beyond the protection of attributes to include considerations of technological and data due process in the form of information privacy law.

V FROM THE PROTECTION OF ATTRIBUTES TO INFO-STRUCTURAL DUE PROCESSES

We have thus far set out our argument. The predictive segmentation and prescriptive strategies of talent analytics have some of the same process hallmarks of traditional forms of discrimination. Individuals are segmented into different groups and those groups can then be targeted for behaviour modification. In effect, different groups are treated differently. We contend that the different treatments accorded may not give rise to discriminatory practices under anti-discrimination law because those groups are not founded on traditional protected attributes. We then put forward a new framework, an info-structural perspective, which allows the viewer a different outlook on potential discriminatory practices based on structural discrimination. In this final substantive part, we briefly outline the argument for a new form of anti-discrimination protection, based on information privacy law, that seeks to imbue fairness into information infrastructural processes as well as protecting informational attributes. It is not within the scope of this paper to cover the information privacy issues in depth, so we use this last part to herald future research in this area.

If we accept that the problem of potential discriminatory practices arising from talent analytics emanates from the use of non-protected attributes for the segmentation and targeted action of specified groups, then a solution becomes relatively clear. We need to protect those attributes that are used for segmentation and prescription. This includes, for example, the browser used, the websites visited and the location of socialisations in the workplace. But what are these informational attributes? They are definitely not the social and physical characteristics of first-generation anti-discrimination law. Instead, these attributes are snapshots, insights if you like, into the behavioural existence of individuals which can be used to infer predictions of future behaviours. These informational attributes are akin to personal information: information about individuals or information that relates to individuals.[174] An obvious solution arises, namely, that personal information becomes a protected attribute of anti-discrimination law.

Personal information as a protected attribute would then provide protections for segmented groups who have been targeted for prescripted discriminatory behaviour modifications, and it would also provide a new norm that could assist in prohibiting the info-structural aspects of talent analytics. The issue then becomes whether information such as browser use, web histories and location tracking would be personal information in the context of predictive segmentation and prescriptive action. The conceptual basis of personal information is partial yet inherently contextual.[175] Any piece of information can be personal information depending on the social context[176] that it is applied within, as long as it identifies or reasonably identifies an individual. Accordingly, the random and unintuitive informational attributes that are at the heart of talent analytics do have the contextual capacity to be personal information. Moreover, the voracious nature of talent analytics, and indeed all forms of ‘big data analytics’, is such that all information is relevant, including the seemingly irrelevant, because any piece of information has the capacity to identify an individual. The insatiable rationale of the unintuitive search therefore demands that all information should be treated as personal information.

There are several issues with this approach. Information privacy law was never designed on the premise that all information should be classed as personal information.[177] The insertion of a ‘reasonableness’ element into the definitions of personal information seeks to minimise the scope within which any piece of information can be personal information.[178] The limited Australian case law on this issue is a case in point. A reasonable identification involves a process of ‘singling out’ an individual[179] and factors in the organisational exigencies of cost, resources and skills.[180] A reasonable identification therefore involves actual rather than theoretical identification.[181] All of which points to the fact that, from the perspective of information privacy law, not all information should be considered personal information.
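The notion of ‘singling out’ can be made concrete. In the following minimal illustration, built on fabricated records, attributes that are individually innocuous single out an individual once combined:

    # A minimal illustration, on fabricated records, of how seemingly
    # non-identifying attributes can 'single out' one individual.
    records = [
        {"postcode": "4000", "browser": "Firefox", "lunch_spot": "kitchen"},
        {"postcode": "4000", "browser": "Firefox", "lunch_spot": "desk"},
        {"postcode": "4000", "browser": "Safari",  "lunch_spot": "kitchen"},
    ]

    def singles_out(rows, **attrs):
        """True if the attribute combination matches exactly one record."""
        matches = [r for r in rows
                   if all(r[k] == v for k, v in attrs.items())]
        return len(matches) == 1

    print(singles_out(records, postcode="4000"))                       # False
    print(singles_out(records, browser="Firefox", lunch_spot="desk"))  # True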

Information privacy law does not accord absolute rights to individuals. The Privacy Act 1988 (Cth) does not provide an absolute right to be let alone.[182] Instead, information privacy law attempts to balance limited rights of control and involvement for individuals in the processes of personal information exchange against the informational needs of public and private sector organisations.[183] The informational needs of organisations are therefore important and fundamental considerations in information privacy law. So much so that a number of commentators have argued that information privacy law, and in particular Australian information privacy law, overtly favours organisations over individuals.[184] The employee records exemption in the Privacy Act is an apt case in point.

Under the Privacy Act, personal information has to be held in a record[185] and certain types of record are excluded from the scope of the Act. The key exemption in relation to talent analytics is the employee records exemption. Any act or practice directly related to a current employment relationship is exempted from the operation of the Act in relation to private sector organisations.[186] However, the exemption does not extend to future employment relationships and the Privacy Act is therefore applicable to personal information involving job applications.[187] An employee record is a record of personal information relating to employment and can cover a panoply of different types of information. Reasons for termination, terms and conditions of employment, performance details and leave details are therefore exempt from the Act.[188] The legislative intention is apparently clear. The protections provided under the Privacy Act are not intended to interfere with the operation of workplace relations legislation.[189] The Federal Court has made similar observations, rejecting the argument that the information privacy rights arising out of the Privacy Act are a workplace right protected by the Fair Work Act.[190]
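The operation of the exemption can be stated schematically. The following sketch encodes our simplified reading of the distinction; it is an illustration of the logic only, not the statutory test, and the function and parameter names are our own:

    # A schematic of our reading of the employee records exemption:
    # the logic, not the statutory text, and simplified accordingly.
    def privacy_act_applies(is_private_sector: bool,
                            current_employee: bool,
                            related_to_employment: bool) -> bool:
        """Roughly: acts directly related to a *current* employment
        relationship are exempt for private sector organisations;
        prospective employment (job applications) remains covered."""
        if is_private_sector and current_employee and related_to_employment:
            return False  # employee records exemption applies
        return True

    # A job applicant's personal information remains covered ...
    print(privacy_act_applies(True, current_employee=False,
                              related_to_employment=True))  # True
    # ... but a current employee's record is exempt.
    print(privacy_act_applies(True, current_employee=True,
                              related_to_employment=True))  # False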

Applying this brief overview of the conceptual basis of information privacy law and the operation of the Privacy Act to talent analytics, it is clear that there would be significant difficulties in making personal information a protected attribute of anti-discrimination law. All of which again suggests that information privacy law was not designed to provide the same level of protection as anti-discrimination law.[191] Should information privacy law, and the rights it provides, therefore be rejected from the scope of anti-discrimination law?

To answer that question, we contend that it is important to reconsider the info-structural perspective put forward in this article. The info-structural perspective is not about informational attributes. It is about the dangers of embedding structures of bias into organisational and societal information infrastructures. An info-structural perspective does not result in the conclusion that personal information should be a protected attribute of anti-discrimination law, or that all information should be personal information. Rather, it points to the process protections of information privacy law as a means to protect against the embedded biases and cultural inequalities of structural discrimination.[192] In effect, if all information has the capacity to be personal information, then the users of data for analytical purposes should be mindful of information privacy obligations.[193] The info-structural perspective points towards information privacy law as a means to provide infrastructural and technological due process embedded in the heart of predictive segmentations and prescriptive outcomes. The work of Keats Citron,[194] Pasquale[195] and Crawford and Schultz[196] is important in this regard. However, given our focus of attention, we label such due process considerations info-structural rather than technological due processes[197] or procedural data due processes.[198]

The application of info-structural due process would enable the incorporation of the process protections of information privacy law into the embedded structures of talent analytics. Doing so would assist in ameliorating the more voracious aspects of talent analytics, starting with metadatafication: the treatment of content as just another form of metadata because of its machine-readable capacity.[199] For example, as highlighted above, the words in a tweet can be used to identify whether a programmer is thinking of leaving a company. Talent analytics does not class this as communication or personal content. Instead it is metadata, and patterns of behaviour can be identified through combinations of words. The words are thus metadata for machine-readable algorithms. Info-structural due process would require that job applicants in this situation were provided with meaningful notice about how their tweets could subsequently be analysed and the consequences that could flow from this analysis. The effect would be threefold. First, candidates would have an opportunity to adjust their behaviour, thus reducing the potential inequality of predictive segmentation strategies.[200] Second, candidates could then complain, as they would have knowledge of the decision-making process.[201] Finally, employers would be forced to think about the consequences of analytical application for individuals, which would call into question the taken-for-granted process of metadatafication. Info-structural due process would therefore provide the means for the employer critical self-reflexivity that Sturm rightly contends is a necessary element of protection against structural discrimination.[202]
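To illustrate what metadatafication looks like in practice, the following sketch reduces the words of a post to countable signals. The watch-list and scoring are invented assumptions of our own, not any real attrition model:

    # A deliberately crude sketch of metadatafication: the content of a
    # post survives only as tokens to be counted. The watch-list below
    # is an invented assumption, not any real model.
    ATTRITION_SIGNALS = {"recruiter", "interview", "resign", "opportunity"}

    def attrition_signal(post: str) -> float:
        """Fraction of words in a post matching the watch-list; the
        communication is treated purely as machine-readable metadata."""
        words = [w.strip(".,!?").lower() for w in post.split()]
        if not words:
            return 0.0
        return sum(w in ATTRITION_SIGNALS for w in words) / len(words)

    print(attrition_signal("Great chat with a recruiter about a new opportunity"))

Nothing in the sketch reads the post as communication; the words are simply counted, which is precisely the shift that info-structural due process would require employers to disclose.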

It is beyond the scope of this paper to demonstrate in greater depth how info-structural due process could operate. Nevertheless, we can say that the development of info-structural due process is in keeping with the rich and contemporary information privacy law literature developed over the last several years. Part of that literature provides a strong and justified criticism of information privacy law’s process protections.[203] There is not enough scope in this article, and our research is not advanced enough, to put forward a significantly developed discussion of future legal options in relation to info-structural implementations. As such, we hope this article will be used as a discussion point for such considerations rather than read as a justification for the continued and uncritical use of information privacy law protections, which is not our intention. Rather, we agree that information privacy law is one of the main bulwarks against risks arising from analytics[204] and it therefore seems a necessary and logical starting point for thinking about info-structural due processes. We do, however, agree that information privacy law is showing its age and that lawmakers do not seem to appreciate this.[205]

Despite these criticisms, we contend that the recent information privacy law literature on this subject provides an indication of the types of info-structural due processes that could be implemented. These could include: enhanced use of meaningful notification strategies;[206] limiting the use of information for analytical processes;[207] providing a transparent foundation for predictive segmentation and prescriptive outcome strategies;[208] enhanced data cleansing processes that minimise the risk of inaccuracy;[209] proficient de-identification structures;[210] and the foundation of an ethical base for ‘big data analytics’.[211] The implementation of such strategies will cause much controversy,[212] create many complexities[213] and restrict the seemingly unbounded and relentless effectiveness of the unintuitive quest.[214] However, the importance of info-structural forms of due process should not be overlooked because they are an essential tool to counteract ‘acceptance creep’ and they provide a foundation for a rich, difficult and important future discourse for those persons affected by talent analytics, namely, all of us.[215]
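By way of illustration only, one possible shape for such an info-structural due-process check is sketched below. The class and field names are hypothetical constructions of our own: an analytical pipeline refuses to release an attribute into analysis unless meaningful notice of that use has been recorded:

    # A hypothetical sketch of a due-process gate on analytical use of
    # attributes; the design and all names are our own assumptions.
    class DueProcessError(Exception):
        pass

    class NoticedPipeline:
        def __init__(self):
            self.noticed = {}  # attribute -> declared analytical purpose

        def give_notice(self, attribute, purpose):
            """Record the meaningful notice given to the data subject."""
            self.noticed[attribute] = purpose

        def use(self, attribute, record):
            """Release an attribute into analysis only if noticed."""
            if attribute not in self.noticed:
                raise DueProcessError(f"no notice given for '{attribute}'")
            return record[attribute]

    pipeline = NoticedPipeline()
    pipeline.give_notice("browser", "retention prediction")
    applicant = {"browser": "Firefox", "tweets": ["off to an interview"]}
    print(pipeline.use("browser", applicant))  # permitted: 'Firefox'
    # pipeline.use("tweets", applicant)  # would raise DueProcessError

The design choice is the point: the gate forces the organisation to declare each analytical use before the data flows, which is the kind of critical self-reflexivity discussed above.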

VI CONCLUSION

Descriptive, predictive and prescriptive analytics have significantly altered how many employers operate. The nature of talent analytics means that traditional anti-discrimination laws struggle to adequately regulate these processes. The info-structural perspective we put forward heralds a new form of discriminatory practice, info-structural discrimination, in which discriminatory practices founded on the processes of predictive segmentation and prescriptive outcomes are embedded within organisational and industry-wide cultures and within society as a whole. These processes can produce inequalities through the segmentation of employee groupings based on unintuitive attributes that are ever-changing. Info-structural discrimination is therefore infrastructural in nature.

Within organisations, it warns of acceptance creep: the implementation of prescriptive processes without full consideration of the consequences for employees. Within society, it warns of the complicated impacts of the unbounded limits of predictive logic and the never-ending cycles that logic will ultimately produce.

We are not talking about a generational shift here. It is not simply the next generation of discrimination. Instead, we are talking about a paradigm shift in which the processes of discriminatory practice are transposed into information infrastructures. Info-structural discrimination is therefore not about discrimination by information usage per se, because segmented and prescripted discriminations are no longer based on combining information and protected attributes. Instead, info-structural discrimination refers to discrimination by the embedded analytical practices of predictive segmentation and prescriptive action.

An info-structural perspective creates an entirely new paradigm for approaching the inequalities that could arise through talent analytics, one that goes beyond the attributional restrictions of protected physical and social characteristics. We contend that the simple solution of making personal information a protected attribute is not viable. Instead, we call for the development of info-structural due processes formed on the process protections of information privacy law as a means to ameliorate the embedded biases and cultural inequalities of info-structural discrimination. We hope that the info-structural perspective and due processes highlighted in this article will give rise to a new dimension in what constitutes discrimination in a ‘big data world’ and the information privacy and anti-discrimination law protections that are required.


[*] Lecturer at the TC Beirne School of Law, The University of Queensland.

[**] Lecturer at the TC Beirne School of Law, The University of Queensland. The authors gratefully acknowledge the helpful comments provided by Heather Douglas, Graeme Orr, Mark Andrejevic and the anonymous reviewers. The authors would also like to thank Dr Robert Vogt for very helpful discussions regarding analytical processes. Any errors are solely attributable to the authors.

[1] Michael Lewis, Moneyball (WW Norton, 2003) 17 detailing comments of Billy Beane, the then general manager of the Oakland A’s baseball team, describing the difficulties of selecting new players through recruitment drafts.

[2] See ibid 24–6 regarding the clash of recruitment cultures.

[3] We use the term ‘big data’ hesitantly given the ephemeral way in which the term is used and the rhetorical underpinnings that are often used as a justification for its own existence. We do however agree with Hartzog and Selinger that if the term ‘big data’ has any utility, it is in its understanding as a heuristic term; a term that frames a complex and general issue. See Woodrow Hartzog and Evan Selinger, What You Don’t Say About Data Can Still Hurt You (21 November 2013) Forbes <http://www.forbes.com/sites/privacynotice/2013/11/21/what-you-dont-say-about-data-can-still-hurt-you/>.

[4] See, eg, Thomas H Davenport and Jeanne G Harris, Competing on Analytics (Harvard Business School Press, 2007) 17–21, 78 regarding the use of analytics for player selection and recruitment in US sport. It should also be noted that ‘talent analytics’ can also be termed different names such as ‘HR analytics’, ‘people analytics’ and ‘workforce analytics’. See, eg, ‘HR Analytics’: Alec Levenson, ‘Harnessing the Power of HR Analytics’ (2005) 4 Strategic HR Review 28; ‘People Analytics’: Ben Waber, People Analytics: How Social Sensing Technology Will Transform Business and What It Tells Us about the Future of Work (FT Press, 2013); ‘Workforce Analytics’: Kronos, Workforce Analytics <http://www.kronos.com/labor-analysis/workforce-analytics-features.aspx> .

[5] Davenport and Harris, above n 4, 79–81.

[6] E H, ‘How Might Your Choice of Browser Affect Your Job Prospects?’ on The Economist, The Economist Explains (10 April 2013) <http://www.economist.com/blogs/economist-explains/2013/04/economist-explains-how-browser-affects-job-prospects>.

[7] Waber, above n 4, 79–80, 105.

[8] Prescriptive segmentation is an analytical process which segments employees into different cohorts and targets those cohorts with different prescriptive outcomes.

[9] Susan Sturm, ‘Second Generation Employment Discrimination: A Structural Approach’ (2001) 101 Columbia Law Review 458.

[10] It is important to note that the nature of our research cuts across established disciplinary boundaries internal and external to law. Given the limited scope of our paper, and the expansive nature of the topic matter, it has not been possible to cover all relevant literature. A case in point is the recent ‘sociology of work’ literature highlighted by one of the anonymous reviewers. We acknowledge the importance of this literature to this issue and the fact that this work is not detailed is a reflection of the limits of our interdisciplinary knowledge rather than any suggestion that it is not pertinent or relevant.

[11] It should be noted that the development of talent analytics is more prominent in the US but there are indications of Australian use and implementation, as detailed at Part II.A below.

[12] Thomas H Davenport, Jeanne Harris and Jeremy Shapiro, ‘Competing on Talent Analytics’ (2010) 88(10) Harvard Business Review 52, 54.

[13] Ibid.

[14] See, eg, Thomas H Davenport, Jeanne G Harris and Robert Morison, Analytics at Work (Harvard Business Press, 2010) 105.

[15] Andrew McAfee and Erik Brynjolfsson, ‘Big Data: The Management Revolution’ (2012) 90(10) Harvard Business Review 60, 62.

[16] Ron Eldridge, ‘Conduct a Proper Analysis of Exit Data to Find Out Why Employees Really Leave’ (2008) 14(4) People Management 70.

[17] See, eg, Talent Analytics, Case Study: Raw Talent Traits Correlated to Attrition <http://www.talentanalytics.com/resources/case-studies/>; Alec Levenson, ‘Using Targeted Analytics to Improve Talent Decisions’ (Center for Effective Organisations, January 2011) <http://ceo.usc.edu/pdf/G11-03.pdf>; Chris Sorensen, ‘The New Boss: Big Data’ (2012) 125(42) Maclean’s 40.

[18] Pasha Roberts, How Talent Benchmarking Slashed Call Center Attrition (6 September 2013) DataInformed <http://data-informed.com/talent-benchmarking-slashed-call-center-attrition/> .

[19] See, eg, Scott Mondore, Shane Douthitt and Marisa Carson, ‘Maximising the Impact and Effectiveness of HR Analytics to Drive Business Outcomes’ (2011) 34(2) People & Strategy 20, 21.

[20] Ibid.

[21] See Waber, above n 4, 7.

[22] See Omer Tene and Jules Polonetsky, ‘Big Data for All: Privacy and User Control in the Age of Analytics’ (2013) 11 Northwestern Journal of Technology and Intellectual Property 239, 247–50.

[23] See, eg, Waber, above n 4, 10–11.

[24] See, eg, Tanzeem Choudhury and Sandy Pentland, ‘Sensing and Modeling Human Networks Using the Sociometer’ (Paper presented at the International Conference on Wearable Computing, White Plains, New York, 2003) <http://www.cs.cornell.edu/~tanzeem/pubs/choudhury_iswc2003.pdf> .

[25] See MIT Human Dynamics Laboratory <http://hd.media.mit.edu/> .

[26] Needless to say, the use of such sensors has been controversial. See, eg, John Hall, ‘Is Your Boss Watching You? Surveillance Device Tracks Employees’ Movements in the Office, Sends Details of Conversations and Even Times Their Toilet Breaks’, Daily Mail (online), 6 February 2014 <http://www.dailymail.co.uk/sciencetech/article-2552858/Workplace-surveillance-device-tracks-employees-movements-office-sending-boss-details-conversations-colleagues-long-time-spend-toilet.html> .

[27] See, eg, Waber, above n 4, 4.

[28] Rahul Saxena and Anand Srinivasan, Business Analytics: A Practitioner’s Guide (Springer, 2013) 38.

[29] Irv Lustig et al, ‘The Analytics Journey: An IBM View of the Structured Data Analysis Landscape: Descriptive, Predictive and Prescriptive Analytics’, [2010] (November–December) Analytics 11 <http://www.analytics-magazine.org/november-december-2010/54-the-analytics-journey.html> .

[30] See Intel IT Center, Big Data 101: Unstructured Data Analytics (June 2012) Intel <http://www.intel.com.au/content/www/au/en/big-data/unstructured-data-analytics-paper.html?wapkw=unstructured>.

[31] Asking for the Facebook passwords of prospective employees has been a controversial issue resulting in legislation in some US states. See, eg, David Kravets, 6 States Bar Employers From Demanding Facebook Passwords (1 February 2013) Wired <http://www.wired.com/threatlevel/2013/01/password-protected-states/> .

[32] Lustig et al, above n 29.

[33] See Saxena and Srinivasan, above n 28, 5.

[34] E H, above n 6; Eamon Javers, Inside the Wacky World of Weird Data: What’s Getting Crunched (12 February 2014) CNBC <http://www.cnbc.com/id/101410448> .

[35] Don Peck, ‘They’re Watching You at Work’ (2013) 312(5) Atlantic 72 <http://www.theatlantic.com/magazine/archive/2013/12/theyre-watching-you-at-work/354681/>.

[36] Andrew D Banasiewicz, Marketing Database Analytics: Transforming Data for Competitive Advantage (Routledge, 2013) 187.

[37] See, eg, Davenport, Harris and Morison, above n 14, 83.

[38] See ibid 84–5, 87 and the targeting elements of predictive analytics which results in the question ‘Do we have a good target?’

[39] Ibid 85–6.

[40] For example, the analytics identifies a segment of individual customers but the relevance of that segment is yet to be identified.

[41] Thomas H Davenport, ‘Analytics 3.0’ (2013) 91(12) Harvard Business Review 64.

[42] Ibid 69.

[43] See, eg, H P Luhn, ‘A Business Intelligence System’ (1958) 2 IBM Journal of Research and Development 314.

[44] See Bart W Schermer, ‘The Limits of Privacy in Automated Profiling and Data Mining’ (2011) 27 Computer Law & Security Review 45, 46.

[45] See Thomas H Davenport, ‘The New World of Business Analytics’ (International Institute for Analytics, March 2010) 4 <http://www.sas.com/resources/asset/IIA_NewWorldofBusinessAnalytics_March2010.pdf>. See also Lawrence S Maisel and Gary Cokins, Predictive Business Analytics: Forward-Looking Capabilities to Improve Business Performance (John Wiley & Sons, 2014).

[46] See Davenport, above n 41, 67.

[47] See ibid 66.

[48] Mark Andrejevic and Mark Burdon, ‘Defining the Sensor Society’ (2015) forthcoming Television and New Media. See also Davenport, above n 41, 69–72 regarding the importance of continual data collection and analytical processes.

[49] See, eg, Davenport, above n 41, 69 stating ‘which is usually a good thing.’

[50] See ibid 70.

[51] See, eg, Davenport, Harris and Morison, above n 14, 83 stating ‘[s]egmentation in turn enables differentiated action – treating individual customers differently, or choosing the most efficient path in a flexible business process.’ Differentiated action is consequently the progenitor of prescriptive action.

[52] See Michael Walker, Prescriptive Analytics (27 August 2013) Data Science Central <http://www.datasciencecentral.com/profiles/blogs/prescriptive-analytics> .

[53] See, eg, Thomas H Davenport and Jill Dyché, ‘Big Data in Big Companies’ (International Institute for Analytics, May 2013) 28 <http://www.sas.com/resources/asset/Big-Data-in-Big-Companies.pdf> .

[54] Lustig et al, above n 29.

[55] See Davenport, Harris and Morison, above n 14, 121, 129, 135.

[56] See also Ian Kerr and Jessica Earle, ‘Prediction, Preemption, Presumption: How Big Data Threatens Big Picture Privacy’ (2013) 66 Stanford Law Review Online 65, 67 regarding consequential and pre-emptive predictions.

[57] Davenport, above n 41, 70. See also ibid regarding prediction services.

[58] Evolv, Home Page <http://www.evolv.net/> regarding workforce analytics.

[59] Visier, Home Page <http://www.visier.com/> also regarding most aspects of workforce analytics.

[60] Gild, Home Page (2014) <http://www.gild.com/> regarding software programmer recruitment.

[61] Knack, Home Page <http://knack.it/> regarding the use of online games to identify employee traits.

[62] Deloitte, Workforce Retention Analytics <http://www.deloitte.com/view/en_US/us/Services/additional-services/deloitte-analytics-service/02e4b9925fb3e310VgnVCM2000003356f70aRCRD.htm>.

[63] Accenture, Technology Services for Human Capital Management: Workforce Analytics <http://www.accenture.com/us-en/Pages/service-human-capital-workforce.aspx> .

[64] IBM, Cognos Workforce Performance Talent Analytics <http://www-03.ibm.com/software/products/en/workforce-talent-analytics/>.

[65] Aki Ito, ‘Hiring in the Age of Big Data’ [2013] (October) Bloomberg Businessweek 40, 41.

[66] Ibid.

[67] Peck, above n 35.

[68] See, eg, ibid, citing an interview with sociologist, Lauren Rivera, who interviewed professionals from elite investment banks, consultancies and law firms about how those firms recruited and evaluated candidates. Rivera’s research indicated that an important factor behind hiring decisions was ‘shared leisure interests’ and that ‘assessors purposefully used their own experiences as models of merit’.

[69] Ito, above n 65.

[70] ‘Robot Recruiters’, The Economist (6 April 2013) 78.

[71] See Kate Crawford and Jason Schultz, ‘Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms’ (2014) 55 Boston College Law Review 93. See also Tene and Polonetsky, above n 22, 253 about predictive risks.

[72] The often quoted Evolv example.

[73] Evolv, Xerox Finds Precision and Profit with Workforce Predictive Analytics <http://www.evolv.net/success-stories/case-study-xerox/>.

[74] Peck, above n 35.

[75] Ibid.

[76] See Waber, above n 4, 84–7.

[77] Peck, above n 35.

[78] Ibid.

[79] Ibid.

[80] Kronos, Global Workforce Management Solutions <http://www.kronos.com.au/about-kronos/about-kronos-australia.aspx> .

[81] Kronos, Workforce Timekeeper <http://www.kronos.com.au/time-attendance/employee-time-tracking.aspx> .

[82] Kronos, Workforce Absence Manager <http://www.kronos.com.au/absence-management/absence-management-tracking.aspx> .

[83] Kronos, Workforce Analytics for Manufacturing <http://www.kronos.com.au/industry-solutions/manufacturing/manufacturing-labour-analytics.aspx>.

[84] Kronos, Workforce Tablet Analytics <http://www.kronos.com.au/labour-analysis/workforce-tablet-analytics.aspx> .

[85] Australian Human Resources Institute, Workforce Analytics <https://www.ahri.com.au/education-and-training/short-courses/essential-hr-skills/workforce-analytics>.

[86] Visier, Creating the Business Case for Workforce Analytics: Quantifying the Business Value <http://visier.navigo.com.au/> .

[87] Australian National University, Requesting Data <http://hr.anu.edu.au/employment-at-anu/workforce-planning/requesting-data> .

[88] LinkedIn, Workforce Analytics Advisor <http://www.linkedin.com/jobs2/view/13801847> .

[89] See, eg, Entelo, Find & Engage the Talent You Need <https://www.entelo.com/products/search>.

[90] Peck, above n 35.

[91] Waber, above n 4, 86.

[92] Peck, above n 35 regarding Harrahs, the owners of various Las Vegas casinos and the smiling abilities of their croupiers.

[93] Davenport and Dyché, above n 53, 4.

[94] Evolv, Case Study: Novo 1 <http://www.evolv.net/success-stories/case-study-novo-1/> .

[95] Visier, HR Dashboard (2014) <http://www.visier.com/hr-dashboard/> .

[96] Davenport, above n 41, 72. It should also be acknowledged that the author ended this quotation with the sentence ‘In the world of Analytics 3.0, there are times we need to look away.’ As we highlight below, the info-structural perspective also asks that employers look much closer at informational infrastructures.

[97] Davenport and Dyché, above n 53, 28.

[98] See Schneider, Truck Driving School Programs <https://schneiderjobs.com/company-drivers/driving-school-programs#findschool>. Schneider does not have its own truck driving school but it is plausible that the same sensors could be applied in truck driving schools. If not, we argue they soon will be. Furthermore, Schneider has its own graduate orientation program which features tractor simulators, in-truck driving and skills test that would be likely to involve sensor loaded vehicles. See Schneider, Recent Graduate Orientation <http://schneiderjobs.com/company-drivers/orientation-and-safety/recent-graduate-orientation> .

[99] For example the sensor derived data in the Schneider example. It should also be highlighted that the Schneider example is not as radical as, say, the Evolv example, as the informational attribute in this case study is consciously relevant to a decision about future employment. Nevertheless, the case study is useful as it highlights how predictive segmentation and prescriptive outcomes operate.

[100] For a discussion of how disability is a social construct with an underlying real fact, see Tom Shakespeare, Disability Rights and Wrongs Revisited (Routledge, 2nd ed, 2014) 59–61.

[101] Suzanne B Goldberg, ‘Discrimination By Comparison’ (2011) 120 Yale Law Journal 728. See also Bart Custers et al, ‘The Way Forward’ in Bart Custers et al (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases (Springer, 2013) 341, 343.

[102] Blake R Bertagna, ‘The Internet – Disability or Distraction? An Analysis of Whether “Internet Addiction” Can Qualify as a Disability Under the Americans with Disabilities Act(2008) 25 Hofstra Labor & Employment Law Journal 419.

[103] Anna Kirkland, ‘Think of the Hippopotamus: Rights Consciousness in the Fat Acceptance Movement’ (2008) 42 Law & Society Review 397; Shannon Liu, ‘Obesity as an “Impairment” for Employment Discrimination Purposes Under the Americans with Disabilities Act Amendments Act of 2008(2010) 20 Boston University Public Interest Law Journal 141.

[104] For a comprehensive analysis, see Joshua Bloom and Waldo E Martin Jr, Black Against Empire: The History and Politics of the Black Panther Party (University of California Press, 2013); Serena Mayeri, Reasoning from Race: Feminism, Law, and the Civil Rights Revolution (Harvard University Press, 2011).

[105] Civil Rights Act of 1964, Pub L No 88-352, 78 Stat 241.

[106] Neil Rees, Simon Rice and Dominique Allen, Australian Anti-discrimination Law (Federation Press, 2nd ed, 2014) 3–5.

[107] Racial Discrimination Act 1975 (Cth); Sex Discrimination Act 1984 (Cth).

[108] Age Discrimination Act 2004 (Cth); Disability Discrimination Act 1992 (Cth).

[109] For a current example, see the debate around the rights of same-sex couples which has led to the recent High Court of Australia judgment in Commonwealth v Australian Capital Territory [2013] HCA 55; (2013) 304 ALR 204. In this judgment the Court unanimously found that the Marriage Equality (Same Sex) Act 2013 (ACT) was inconsistent with the Marriage Act 1961 (Cth) and therefore was of no effect.

[110] Lior Jacob Strahilevitz, Information and Exclusion (Yale University Press, 2011) 140 stating that ‘Animus-based discrimination occurs when an individual treats members of a group differently because of (conscious or unconscious) antipathy towards that group.’

[111] Ibid. See also Lior Jacob Strahilevitz, ‘Privacy versus Antidiscrimination’ (2008) 75 University of Chicago Law Review 363.

[112] See, eg, Oscar H Gandy Jr, ‘Engaging Rational Discrimination: Exploring Reasons for Placing Regulatory Constraints on Decision Support Systems’ (2010) 12 Ethics and Information Technology 29, 36.

[113] Strahilevitz, above n 110, 141.

[114] Ibid.

[115] Rachel Bradshaw, ‘The Use and Misuse of DNA Profiles in Australia’ (2013) 37 Australian Bar Review 17.

[116] James C Wilson, ‘(Re)Writing the Genetic Body-Text: Disability, Textuality, and the Human Genome Project’ in Lennard J Davis (ed), The Disability Studies Reader (Routledge, 2nd ed, 2006) 67, 69, 71.

[117] Thomas Lemke, Perspectives on Genetic Discrimination (Routledge, 2013) 23.

[118] Charter of Fundamental Rights of the European Union [2000] OJ C 364/1, art 21 includes genetic features as one of 14 listed grounds. Genetic discrimination is in the Genetic Information Nondiscrimination Act of 2008, Pub L No 110-233, 122 Stat 881.

[119] Disability Discrimination Act 1992 (Cth) s 4(1) (definition of ‘disability’ para (j)). See also the amendments introduced by the Sex Discrimination Amendment (Sexual Orientation, Gender Identity and Intersex Status) Act 2013 (Cth) which extended the Sex Discrimination Act 1984 (Cth) s 4(1) to prohibit intersex discrimination based upon a person’s, inter alia, ‘genetic features’.

[120] See, eg, Raphael Gellert et al, ‘A Comparative Analysis of Anti-Discrimination and Data Protection Legislations’ in Bart Custers et al (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases (Springer, 2013) 61, 63 regarding a comparison of European Union anti-discrimination and data protection laws which feature ‘negative freedoms’ at their heart.

[121] An exception to this approach can be found in the operation of the Fair Work Ombudsman in the Fair Work Act 2009 (Cth) s 682. For a discussion, see Paul Harpur, Ben French and Richard Bales, ‘Australia’s Fair Work Act and the Transformation of Workplace Disability Discrimination Law’ (2012) 30 Wisconsin International Law Journal 190.

[122] Rosemary Owens, Joellen Riley and Jill Murray, The Law of Work (Oxford University Press, 2nd ed, 2011) 418–20. The only jurisdiction in Australia that includes a general positive duty is Victoria under s 15 of the Equal Opportunity Act 2010 (Vic). These duties have weak enforcement and cannot be enforced through civil litigation. See, eg, Dominique Allen, ‘Victoria Paves the Way to Eliminating Discrimination’ (2010) 23 Australian Journal of Labour Law 318; Paul Harpur, ‘A Proactive Duty to Eliminate Discrimination in Victoria’ (2012) 19 Australian Journal of Administrative Law 180, 180–3.

[123] Age Discrimination Act 2004 (Cth) s 15; Disability Discrimination Act 1992 (Cth) s 6; Racial Discrimination Act 1975 (Cth) s 9(1A); Sex Discrimination Act 1984 (Cth) ss 5(2), 6(2), 7(2).

[124] See, eg, Schermer, above n 44, 47; Tene and Polonetsky, above n 22, 255.

[125] See, eg, Gandy, above n 112, 37 regarding a wider notion of discriminatory actions.

[126] See Custers et al, above n 101, 342.

[127] Rees, Rice and Allen, above n 106, 74.

[128] Ibid 117.

[129] See, eg, Gandy, above n 112, 30: discriminatory outcomes can be information processes themselves.

[130] Age Discrimination Act 2004 (Cth) s 14; Disability Discrimination Act 1992 (Cth) s 5; Sex Discrimination Act 1984 (Cth) s 5; Racial Discrimination Act 1975 (Cth) s 9(1).

[131] For instance, in the Evolv browser example, Evolv analysed the applications of over 30 000 individuals which included 30 million separate data points. See Peck, above n 35.

[132] Age Discrimination Act 2004 (Cth) s 16; Disability Discrimination Act 1992 (Cth) s 10; Sex Discrimination Act 1984 (Cth) s 18; Racial Discrimination Act 1975 (Cth) s 8.

[133] Board of Bendigo Regional Institute of Technical and Further Education v Barclay [2012] HCA 32; (2012) 248 CLR 500, 522–3 [56]–[59] (French CJ and Crennan J), 542 [127] (Gummow and Hayne JJ).

[134] See Schermer, ‘The Limits of Privacy’, above n 44, 47 regarding the complexity of predictive data mining algorithms that may learn to discriminate.

[135] Dominique Allen, ‘Reducing the Burden of Proving Discrimination in Australia’ [2009] SydLawRw 24; (2009) 31 Sydney Law Review 579.

[136] [2006] HCA 14; (2006) 230 CLR 174.

[137] See especially the judgment of Gummow, Hayne and Crennan JJ: ibid 198–9.

[138] Awad v Western Sydney Local Health District [2013] NSWADT 287 (‘Awad’). In Awad, a hospital required workers to have a specialist role to obtain a promotion. Out of the 12 positions, 4 workers of Arab background were not appointed to such positions. Accordingly the promotion policy was held to indirectly discriminate.

[139] See, eg, Phil Simon, Too Big to Ignore: The Business Case for Big Data (John Wiley & Sons, 2013) 18.

[140] Frank Pasquale, ‘Restoring Transparency to Automated Authority’ (2011) 9 Journal on Telecommunications and High Technology Law 235, 237.

[141] Sturm, above n 9.

[142] Ibid 467.

[143] Ibid.

[144] Ibid 474.

[145] Ibid 466.

[146] Ibid 467.

[147] See Crawford and Schultz, above n 71, 122.

[148] Sturm, above n 9, 461.

[149] Ibid 461.

[150] Ibid 469.

[151] Ibid 470.

[152] Ibid 471.

[153] Ibid 474.

[154] Ibid.

[155] See Cynthia Dwork and Deidre Mulligan, ‘It’s Not Privacy, and It’s Not Fair’ (2013) 66 Stanford Law Review Online 35, 38 calling for a greater focus on the risks of segmentation.

[156] See, eg, Custers et al, above n 101, 353.

[157] Ibid 352.

[158] See also Jonas Lerman, ‘Big Data and Its Exclusions’ (2013) 66 Stanford Law Review Online 55, 57 regarding unfairness arising from being excluded in big data analytical decisions.

[159] See Paul Ohm, ‘The Underwhelming Benefits of Big Data’ (2013) 161 University of Pennsylvania Law Review Online 339, 340.

[160] See also Crawford and Schultz, above n 71, 99 regarding the problems of asserting decision making certainty into predictive processes that are inherently uncertain.

[161] See Pasquale, above n 140, 237 regarding the dangers that can arise.

[162] The infrastructural element is more in keeping with data analytics as a sociotechnical problem. See Dwork and Mulligan, above n 155, 38.

[163] See Woodrow Hartzog and Evan Selinger, ‘Big Data in Small Hands’ (2013) 66 Stanford Law Review Online 81, 81.

[164] Sturm, above n 9, 471.

[165] See, eg, Schermer, ‘The Limits of Privacy’, above n 44, 47.

[166] Davenport acknowledges the dangers of such an approach. See Davenport, Harris and Morison, above n 14, 12: ‘The same process and logic errors that cause people to err without analytics can creep into analytical decisions.’ See also Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249, 1272 regarding automation bias and lack of critical motivation.

[167] See, eg, Custers et al, above n 101, 352. See also Davenport, Harris and Morison, above n 14, 178 regarding ‘metadecision analysis’ which suggests that organisational decision-makers ask the question ‘How should we make this decision?’ before the decision is made.

[168] See, eg, Saxena and Srinivasan, above n 28, 38.

[169] The process surrounding the finalising of the Human Genome Project illustrates the level of scientific rigour in genetics: Sarah Richardson, Sex Itself: The Search for Male and Female in the Human Genome (University of Chicago Press, 2013).

[170] See Schermer, ‘The Limits of Privacy’, above n 44, 48 regarding the difference between correlation and causation; Ira S Rubinstein, ‘Big Data: The End of Privacy or a New Beginning?’ (2013) 3 International Data Privacy Law 74, 76 also regarding correlation and causation. See generally Bart Custers, ‘Data Dilemmas in the Information Society: Introduction and Overview’ in Bart Custers et al (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases (Springer, 2013) 3, 16–17.

[171] See, eg, danah boyd and Kate Crawford, ‘Critical Questions for Big Data’ (2012) 15 Information, Communication & Society 662, 667.

[172] See, eg, Ylan Q Mui, ‘Wal-Mart Web Site Makes Racial Connections’, Washington Post (Online), 6 January 2006 <http://www.washingtonpost.com/wp-dyn/content/article/2006/01/05/AR2006010502176.html>. The retail giant Wal-Mart’s analytical tools suggested that customers interested in Afro-American documentaries would also have an interest in the movie The Planet of the Apes.

[173] See, eg, Jules Polonetsky and Omer Tene, ‘Privacy and Big Data: Making Ends Meet’ (2013) 66 Stanford Law Review Online 25, 35 discussing the biases in classification systems.

[174] See, eg, Lee A Bygrave, Data Protection Law: Approaching its Rationale, Logic and Limits (Kluwer Law International, 2002) 42 regarding the role of definitions of personal information and information privacy law.

[175] See generally Mark Burdon and Paul Telford, ‘The Conceptual Basis of Personal Information in Australian Privacy Law’ (2010) 17(1) eLaw Journal: Murdoch University Electronic Journal of Law 1.

[176] See, eg, Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press, 2010).

[177] See, eg, Bart Schermer, ‘Risks of Profiling and the Limits of Data Protection Law’ in Bart Custers et al (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases (Springer, 2013) 137, 146; Tene and Polonetsky, above n 22, 258.

[178] See Burdon and Telford, above n 175, 15.

[179] NW v New South Wales Fire Brigades [2005] NSWADT 73, [11]–[12] (O’Connor DCJ); WL v Randwick City Council [2007] NSWADTAP 58, [21] (O’Connor DCJ, Member Higgins and Member Bolt), affd OS v Mudgee Shire Council [2009] NSWADT 315, [20] (Member Molony); Australian Law Reform Commission, For Your Information: Australian Privacy Law and Practice (Law Reform Commission, 2008) 296.

[180] WL v La Trobe University [2005] VCAT 2592; (2005) 24 VAR 23, 34 [52] (Deputy President Coghlan); WL v Randwick City Council [No 2] [2010] NSWADT 84, [30] (Member Higgins).

[181] Re Lobo and Department of Immigration and Citizenship [2011] AATA 705; (2011) 124 ALD 238, 329 [302] (Deputy President Forgie); OS v Mid-western Regional Council [No 3] [2011] NSWADT 230, [20] (Member Molony).

[182] Australian Law Reform Commission, For Your Information: Australian Privacy Law and Practice, Report No 108 (2008) 299.

[183] See, eg, Colin J Bennett and Charles D Raab, The Governance of Privacy: Policy Instruments in Global Perspective (MIT Press, 2nd ed, 2006).

[184] See, eg, Graham Greenleaf, ‘Chapter 5: Privacy in Australia’, in James B Rule and Graham Greenleaf, Global Privacy Protection: The First Generation (Edward Elgar, 2008); Graham Greenleaf, ‘“Tabula Rasa”: Ten Reasons Why Australian Privacy Law Does Not Exist’ [2001] UNSWLawJl 4; (2001) 24 University of New South Wales Law Journal 262; David Lindsay, ‘An Exploration of the Conceptual Basis of Privacy and the Implications for the Future of Australian Privacy Law’ [2005] MelbULawRw 4; (2005) 29 Melbourne University Law Review 131; Roger Clarke, Home Page <http://www.rogerclarke.com/> regarding numerous critiques of Australian information privacy law.

[185] Privacy Act 1988 (Cth) s 6(1).

[186] Ibid s 7B(3).

[187] Office of the Australian Information Commissioner, Coverage of and Exemptions from the Private Sector Provisions (Private Sector Information Sheet 12, 27 November 2007) <http://www.oaic.gov.au/privacy/privacy-resources/privacy-fact-sheets/other/information-sheet-private-sector-12-2001-coverage-of-and-exemptions-from-the-private-sector-provisions>.

[188] See Privacy Act 1988 (Cth) s 6(1).

[189] See, eg, Office of the Australian Information Commissioner, above n 187.

[190] See, eg, Austin v Honeywell [2013] FCCA 662; (2013) 277 FLR 372, 393 [60] (Riley J).

[191] See Schermer, ‘The Limits of Privacy’, above n 44, 49 for a discussion about the complexities of protecting privacy and the efficacy of analytical frameworks.

[192] But see ibid 50 regarding scepticism that information privacy law alone will resolve potentially discriminatory problems. We contend that the embedded nature of info-structural perspectives would further enhance the application of information privacy law as a potential way of resolving discriminatory impacts that go beyond simply setting higher compliance measures. In that sense, the info-structural perspective is about embedding privacy considerations into information infrastructures.

[193] See Schermer, ‘Risks of Profiling’, above n 177, 145–6 regarding the possibility that all information could be personal information and the European regulatory approach.

[194] Citron, above n 166; Danielle Keats Citron and Frank Pasquale, ‘The Scored Society: Due Process for Automated Predictions’ (2014) 89 Washington Law Review 1.

[195] Pasquale, above n 140; Frank Pasquale, ‘Grand Bargains for Big Data: The Emerging Law of Health Information’ (2013) 72 Maryland Law Review 682.

[196] Crawford and Schultz, above n 71, 109.

[197] For the purpose of this article, we use the phrase info-structural due processes to highlight that the issues faced are infrastructural rather than purely technological or data process driven.

[198] Crawford and Schultz, above n 71, 109 regarding procedural data due process that seeks to ‘regulate the fairness of Big Data’s analytical processes’. See also Kerr and Earle, above n 56, 70 regarding presumptive due processes. Again, we gratefully acknowledge the thoughts of Graeme Orr at this juncture for pointing out to us the roots of due process are grounded in administrative law and the procedural fairness which it seeks.

[199] Andrejevic and Burdon, above n 48. See also Crawford and Schultz, above n 71, 98 regarding the organisational benefits arising from metadata use, which essentially avoids information privacy law.

[200] See Crawford and Schultz, above n 71, 108 regarding the challenges of meaningful notice as a mitigation of predictive privacy harms.

[201] See ibid.

[202] Sturm, above n 9, 471.

[203] Custers et al, above n 101, 344–8.

[204] See, eg, ibid 354.

[205] See, eg, ibid 355. See also Gandy, above n 112, 31 regarding the limits of simply accepting information privacy law as a means of stopping discrimination.

[206] Citron, above n 166, 1305.

[207] Gandy, above n 112, 31; Tene and Polonetsky, above n 22, 259.

[208] See, eg, Citron, above n 166; Tal Z Zarsky, ‘Transparent Predictions’ [2013] University of Illinois Law Review 1503; Pasquale, above n 140; Schermer, above n 44, 47; Tene and Polonetsky, above n 22, 269; Neil M Richards and Jonathan H King, ‘Three Paradoxes of Big Data’ (2013) 66 Stanford Law Review Online 41, 42 regarding the ‘Transparency Paradox’.

[209] See, eg, Crawford and Schultz, above n 71, 123.

[210] See, eg, Sara Hajian, Simultaneous Discrimination Prevention and Privacy Protection in Data Publishing and Mining (PhD Thesis, Universitat Rovira I Virgili, 2013); Faisal Kamiran and Toon Calders, ‘Data Preprocessing Techniques for Classification without Discrimination’ (2012) 33 Knowledge Information Systems 1; Ira S Rubinstein, Ronald D Lee and Paul M Schwartz, ‘Data Mining and Internet Profiling: Emerging Regulatory and Technological Approaches’ (2008) 75 University of Chicago Law Review 261, 268.

[211] See, eg, Neil Richards and Jonathan King, ‘Big Data Ethics’ (2014) forthcoming Wake Forest Law Review; Gandy, above n 112, 32, 35 regarding the ethics of segmentation; boyd and Crawford, above n 171, 672.

[212] See Custers et al, above n 101, 353 rightly highlighting the power dimensions involving governing elites and discriminatory practices.

[213] See Schermer, ‘Risks of Profiling’, above n 177, 146.

[214] Note the objectivity assumptions that are used as a rhetorical underpinning for furthering the analytical world. See Davenport, Harris and Morison, above n 14, 137–8.

[215] It is a silent consideration in our article but at the heart of many of the issues presented by our research are power relations. Richards and King highlight briefly these important ‘Power Paradox’ issues as part of their discussion of big data. See Richards and King, above n 208, 45.

