Government Politics

New York City Moves To Create Accountability For Algorithms (propublica.org) 183

The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. From a report: Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination -- the first measure of its kind in the country. The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers' lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation or citizenship status. The task force's report will also explore how to make these decision-making processes understandable to the public. The bill's sponsor, Council Member James Vacca, said he was inspired by ProPublica's investigation into racially biased algorithms used to assess the criminal risk of defendants. "My ambition here is transparency, as well as accountability," Vacca said.
  • by Anonymous Coward

    It's often these same people who impose systematic discrimination based on these exact criteria.

    • Re: (Score:3, Insightful)

      by cayenne8 ( 626475 )
      Hmm..... so, if these councils, reviewing these algorithms that are finding actual bona fide trends... trends that happen to break along racial, sexual, [insert special interest here] lines, and that don't happen to fit the politically correct meme of the day... will they insist these be thrown out?

      So, facts....if inconvenient....are not to be used or trusted?

      Hmm... isn't that kinda defeating the purpose?

      • Re:Mirror (Score:5, Insightful)

        by CanHasDIY ( 1672858 ) on Tuesday December 19, 2017 @12:51PM (#55769493) Homepage Journal

        Right.

        Because apparently, in 2017, math became racist.

      • An algorithm is torturing me as we speak, so they are worth a closer look. But it's all about problems not intended by the algorithm's designers, so where liability lies is confusing. E.g., a racially green programmer favors education for hiring, but education is correlated with money, and legacy racism made green families lower-income than blue families, so the algorithm picks blue people and overlooks green people.
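
        A minimal sketch of the proxy effect the comment above describes, with entirely invented numbers and a hypothetical hiring rule: the rule never sees group membership, yet its outcomes split by group because a legitimate input (education) correlates with group.

```python
import random

random.seed(0)

def make_applicant(group):
    # Invented assumption: legacy income gaps mean "green" applicants are
    # less likely to hold a degree, though ability is identical by group.
    p_degree = 0.3 if group == "green" else 0.6
    return {"group": group, "degree": random.random() < p_degree}

applicants = [make_applicant(g) for g in ("green", "blue") * 5000]

# A "group-blind" hiring rule: it never sees group, only education.
hired = [a for a in applicants if a["degree"]]

for g in ("green", "blue"):
    total = sum(a["group"] == g for a in applicants)
    wins = sum(a["group"] == g for a in hired)
    print(f"{g}: hired {wins / total:.0%}")  # roughly 30% vs. 60%
```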

        • That shouldn't really matter. If they are looking for educated people and more blue people are educated than green, then the organization shouldn't have to worry about hiring less qualified people based on political correctness.

          Almost comically, these types of things also come from absolute hypocrites.

          If I say "Green people are less educated." I'm attacked for propagating a stereotype, yet the same people levying those attacks will say "You can't hire based on education because green people can't compete."

        • It's not "legacy racism", it's a full-standard-deviation lower average IQ that has made green people less educated. IQ is a strong predictor for success and has a strong genetic component that cannot easily be compensated for. OTOH, green people also have a toxic culture that celebrates failure, dependency, and violence; maybe algorithms could do something about that.
      • That's kind of the problem with these algorithms: they are simply applying statistical facts without bothering about causation. Humans tend to do that too, but we've decided that in certain cases it's not OK to assume that statistically significant traits of certain groups apply to any individual belonging to that group. It may be true that people of whatever race are statistically more likely to be involved in crime, but it's not OK to deny an individual a loan on that basis, for instance.

        But when you u
        • It may be true that people of whatever race are statistically more likely to be involved in crime, but it's not OK to deny an individual a loan on that basis, for instance.

          Where does that stop though? Statistically men die earlier than women; is it wrong to charge men more for life insurance? Statistically women cost the medical system more than men; is it wrong to charge women more for health insurance? Statistically men are more likely to be involved in a traffic accident; is it wrong to charge men more for car insurance?

          • Not an easy question, and I'm hardly an expert on legal or ethical matters. But it seems to me that it's unfair to discriminate on traits where there is only an indirect correlation with undesirable outcomes. If men die earlier than women because of physiological traits, then perhaps it's OK to charge them more for life insurance (though insurers and governments might not do or allow that for other reasons). But what if black people die earlier? Statistically speaking that's probably the case, but there
            • there is no direct correlation between being black and dying earlier

              You appear to be using "correlation" to mean "causation".

              A correlation can't be direct or indirect; it's a mathematical fact reflecting how changes in one variable correspond to changes in another. It says nothing about how or why.

          • It depends whether you are looking at correlation or causation. Say, for example, that men not only die earlier than women, they are also more likely to smoke than women. In this case, smoking is the cause of early death, not being a man. So the answer there is to charge smokers more than non-smokers.
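
            A back-of-the-envelope version of that point, with invented rates: if smoking is the causal factor and men merely smoke more, then a sex-based premium charges non-smoking men for a risk they don't carry, while a smoking-based premium prices the factor that actually matters.

```python
# Invented toy numbers: smoking alone drives early death in this model.
P_EARLY_DEATH = {True: 0.30, False: 0.10}   # keyed by "is a smoker"
P_SMOKER = {"men": 0.40, "women": 0.15}     # men smoke more in this toy world

for sex, p_smoke in P_SMOKER.items():
    rate = p_smoke * P_EARLY_DEATH[True] + (1 - p_smoke) * P_EARLY_DEATH[False]
    print(f"{sex}: raw early-death rate {rate:.1%}")

# men: 18.0%, women: 13.0% -- a genuine correlation with sex, even though
# every individual's risk is set entirely by smoking status.
```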

        • That's kind of the problem with these algorithms: they are simply applying statistical facts without bothering about causation.

          But, if you're going for purely predictive results....what part does "causation" play in this at all?

      • that are finding actual bona fide trends

        What's a bona fide trend? How do you distinguish it from correctly identifying racism/sexism in the training data?

        • What's a bona fide trend? How do you distinguish it from correctly identifying racism/sexism in the training data?

          I"m not an AI expert, far from it....

          But I would have to imagine that you could at least start with training data that did NOT list race/sex categories and then just turn it loose and see what it finds on its own?

          And look, there ARE differences between the sexes and the races in things. I'm sure if you never told it race and an AI studied the NBA vs. all other careers... you'd find a lot of trends t

          • I'm assuming this is a legitimate question. It's hard to tell because it's a fairly similar argument to what trolls use.

            Basically, it seems unambiguous that racism (and sexism, but I'll limit myself to racism) existed in the past. The data fed into the AI will take into account racist decisions by humans. As a plausible example that we can pretend is true for this conversation, black people were given worse mortgage terms that led to more defaults. Therefore, the AI interprets black people as having less

            • Hmm.....

              Well, let's say that you start training the AI with more recent data... and if that still shows, without using race as a factor, that black people are still less likely to pay back loans and more likely to default on mortgages, negating the factors of the past, is it still wrong to do so?

              What if, for the sake of argument (not saying it is true), black people in general, by virtue of data analysis, are more likely to be a credit/loan risk? Is that still not a basis to see it as a trend that should be considered?

          • But I would have to imagine that you could at least start with training data that did NOT list race/sex categories and the just turn it loose and see what it finds on its own?

            Check out the backstory of that racist algorithm that decided on jail sentences.

            That is what they did. But you don't need sex/race/age data when, e.g., income/education/prior-conviction data lets you derive sex/race with 99.9% accuracy.
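
            A sketch of that mechanism on synthetic data: the hand-rolled "model" below never sees the protected attribute, yet recovers it for roughly 80% of records from two mundane proxies. That is nowhere near the 99.9% figure cited above, but real data sets carry far more correlated columns than this toy does.

```python
import random

random.seed(1)

def sample(in_group):
    # Invented assumption: income and neighborhood differ by group on average.
    income = random.gauss(40 if in_group else 60, 10)
    zip_cluster = random.random() < (0.8 if in_group else 0.2)
    return income, zip_cluster, in_group

data = [sample(random.random() < 0.5) for _ in range(10_000)]

def predict_group(income, zip_cluster):
    # No protected attribute in sight -- only its shadows.
    return zip_cluster and income < 50

hits = sum(predict_group(i, z) == g for i, z, g in data)
print(f"protected attribute recovered for {hits / len(data):.0%} of records")
```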

      • This is exactly the line of thought that Sweden followed to cover up the negative effects of mass migration. So, good luck with this, New York.

        • Which Sweden? Has to be that Sweden where Trump saw something happening last night and not the northern European country...

  • by Rick Schumann ( 4662797 ) on Tuesday December 19, 2017 @12:30PM (#55769355) Journal
    More and more, so-called 'AIs' are being used in place of conventional algorithms (due mainly to magical thinking), but even the designers of these AIs can't tell you what they're really doing under the hood. That's where we're going to get in trouble with regard to 'accountability'.
    • AIs do not have legal accountability. People have legal accountability, no matter what tools they use. Illegal discrimination conducted by scientific-sounding means is still illegal discrimination.

    • ...even the designers of these AIs can't tell you what they're really doing under the hood...

      What a garbage myth. It's simply untrue.

      The designers know what the AIs are doing, it's just never been a priority to make such explanations easily accessible. It is now, and so they're doing it; you're badly misinformed if you think the designers of these systems are clueless.

      • If Microsoft's "Tay" is any indication, teaching AI not to discriminate isn't going to be easy. She started off as a cheerful innocent teenager, and in under 24 hours the internet turned her into a raging Nazi.

        • ...in under 24 hours the internet turned her into a raging Nazi...

          This is just GIGO though. The internet is like 90% garbage content and trolling assholes - WTF did they expect using that as training data?!?!?!

  • Can people figure out how it discriminates against certain race or gender?

    • by Anonymous Coward
      It would use proxies for this information, ones that are not always obvious.
    • It's the same way I.Q. tests discriminate against conservatives without input on their political beliefs.
      • by Anonymous Coward

        I thought IQ discriminates mostly against blacks who tend to vote liberal. In fact, we can't even use IQ tests for things like employment anymore because the blacks just can't compete.
        IQ tests. Perfectly valid if they put conservatives in a bad light. Discriminatory, racist, pseudoscience, useless, completely subjective, etc. when they tell us things we'd rather not hear.
        But whatever suits your narrative.

    • Can people figure out how it discriminates against certain race or gender?

      The proposal here is to do a study to understand that, yes.

      You did notice that this article was about studying the problem to see if there is algorithmic discrimination, right?

      However, since the example discussed in the text was about DNA testing, I would point out that race and gender are encoded in DNA, so "does not have race/gender input" is not applicable here.

      In other cases, however, yes, it turns out that there can be race and gender encoded into input data even if it is

    • You can look at the outputs. It shouldn't be that hard.

      You can also look at whether any of the inputs are proxies for race/gender.

  • More idiocy (Score:5, Insightful)

    by alvinrod ( 889928 ) on Tuesday December 19, 2017 @12:36PM (#55769387)
    Algorithms don't discriminate if you remove the kind of data (race, age, etc.) that would allow them to make categorizations or judgments based on that data. But if you examine the results after the fact, reapply those labels, and find some difference in outcomes, it's because there is some difference in input, not a category identifier. If you find your algorithm thinks African Americans are a worse lending risk, it's likely because they're categorically less well off financially than other demographic groups, not because it's racist against black people.

    This kind of idiotic approach is just ignoring the actual underlying problems or differences in favor of trying to slap a band-aid on top of it to assuage guilty feelings. Worse yet, it prevents confronting the actual issues head on and is doomed to failure.
    • Re: (Score:2, Insightful)

      by Baron_Yam ( 643147 )

      If you're dealing with medicine, noting ethnic differences is important. Doctors understand probabilities and knowing when certain probabilities are elevated can significantly alter diagnostics and treatment to the benefit of the patient.

      Unfortunately, when you're dealing with most other things... you get discrimination. Maybe - to use your example - your algorithm thinks African Americans are a worse lending risk. That's a problem all on its own because that result will be used not to be more cautious

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        The more we learn about science, the more we are going to want to bury our head in the sand and ignore it.

        Yes, in medicine there are statistical differences between the races.

        What if, just maybe, beyond skin color, there are genetic differences between the races in how people value life, truth, and their propensity to violence. This is where people want to bury their heads in the sand. It's time we get honest and accept these truths.

        • Re: (Score:2, Flamebait)

          by swillden ( 191260 )

          What if, just maybe, beyond skin color, there are genetic differences between the races in how people value life, truth, and their propensity to violence. This is where people want to bury their heads in the sand. It's time we get honest and accept these truths.

          Oh, bullshit. There's nothing "head burying" about wanting to treat people fairly. Even assuming the racial differences you posit exist (assuming race actually exists as a coherent and well-defined thing, which is debatable [1]), the racial differences are utterly swamped by individual differences, so it makes no sense whatsoever to make assumptions about individuals based on racial characteristics. Supposing, to take one example, African Americans score lower on IQ tests because they're not as smart, on av

          • " the racial differences are utterly swamped by individual differences, so it makes no sense whatsoever to make assumptions about individuals based on racial characteristics."

            That's an oft-quoted and rather misleading retort to observations of population differences.

            As an example, suppose we have two populations - A and B. Population A has a mean height of 5.5 feet, with a standard deviation of 6 inches. Population B has a mean height of 5 feet, with a standard deviation of 4 inches. In this hypothetical, almost everyone over 7 feet tall will come from population A.

            • the best basketball players are 7 feet tall or taller

              This is a good proxy. Why?
              1. Height actually helps basketball playing. Presumably because the game's designers chose to keep the baskets high, though the advantage height gives in gaining control of the ball cannot be denied.
              2. Many people measuring one person's height will come to the same conclusion. So height means something, even though it varies morning to evening, with hydration, etc.

              If 85 is around the point at which people are economically productive, and 130 is the point at which

              This is a bad proxy. Why?
              1. Measured IQ does not help in being economically productive; intelligence might.
              2. There are vari

            • Meh. It's a rather obvious (to anyone who's studied statistics) fact that small differences in means of normally-distributed populations create large differences in the proportion of populations far from the mean. Small differences in variance do the same.

              But that's only relevant when you're looking for people who are far from the mean. In professional basketball, you're looking for people who are something like six standard deviations from the mean in basketball ability (assuming the NBA really has the best 500 players).
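
              For anyone who wants the grandparent's height hypothetical worked out, here is the tail arithmetic, stdlib only; the means, deviations, and cutoff are the invented ones given above.

```python
from math import erf, sqrt

def frac_above(mean, sd, cutoff):
    """P(X > cutoff) for a normal distribution, via the error function."""
    z = (cutoff - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# Population A: mean 66 in (5.5 ft), SD 6 in; B: mean 60 in (5 ft), SD 4 in.
a = frac_above(66, 6, 84)   # 7 ft = 84 in
b = frac_above(60, 4, 84)
print(f"A above 7 ft: {a:.1e}")      # ~1.3e-03
print(f"B above 7 ft: {b:.1e}")      # ~9.9e-10
print(f"A/B ratio:    {a / b:,.0f}") # about 1.4 million to one
```

              A modest difference in mean and spread leaves essentially everyone past the 7-foot cutoff in population A, which is the whole point about tails.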

      • Re:More idiocy (Score:4, Insightful)

        by CanHasDIY ( 1672858 ) on Tuesday December 19, 2017 @12:56PM (#55769517) Homepage Journal

        If you're dealing with medicine, noting ethnic differences is important. Doctors understand probabilities and knowing when certain probabilities are elevated can significantly alter diagnostics and treatment to the benefit of the patient.

        And yet, I guarantee some non-doctors out there will claim it's racist to only test black people for sickle-cell anemia. This is why we can't have nice things - we allow the ignorant people to have an equal voice to the knowledgeable.

        • by Mal-2 ( 675116 )

          It is racist to only test black people for sickle cell. The condition is common in areas outside Africa where malaria is still prevalent, you know.

          • OK, so maybe that was a bad example, but the point remains valid - genetics, including the genes that manage racial features, are important to consider in medical practice.

      • Re:More idiocy (Score:5, Interesting)

        by alvinrod ( 889928 ) on Tuesday December 19, 2017 @01:08PM (#55769599)
        Please explain how an algorithm can be biased if you leave out ethnicity from the input data, but only after the fact discover that it results in fewer individuals of some group getting loans. It's not discriminating; it's just pointing out that, as a very broad category, two groups have very different input values. It probably also has different results between Asians, Jews, Hispanics, and most other groups. You're mistaking different outcomes after the fact, which result from different initial factors, for the usual human approach of lazily categorizing based on factors that aren't causal, but merely correlations.

        You can even prove it's not racist by taking input data for individuals from two different demographic groups and seeing if it returns the same results for both. My guess is that it gives loans to black people who have good credit scores, a stable income, etc., and denies them to white people who have poor credit history and no steady income.

        Algorithms are going to be far better than humans because they don't care about black, gay, atheist, etc. A human might well be intellectually lazy enough to group all blacks together as poor credit risks, but an algorithm won't be if you leave that irrelevant data out. In fact, using these algorithms would mean that if there is widespread discrimination against a group, the company using the algorithm can actively pick out the people who will be able to repay loans, which will generate additional profit. They've given themselves customers that other lenders are denying.

        This doesn't look like being careful or taking preventative measures against misuse. Instead it reeks of not liking the results and not caring to address the underlying causes of those results. Giving loans to bad lending risks isn't going to magically make them responsible or more likely to pay back their loans. If black people, Methodists, or white people from WV happen to fall into that category more often than other groups, then you need to actually look at what is contributing to that result if you actually want to fix the problem.
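
        The parent's "same inputs, same answer" check is easy to sketch. `score_loan` below is a hypothetical stand-in, not any real lender's model. Note what the test does and doesn't show: it proves group isn't used directly, which is exactly the gap the proxy objections elsewhere in this thread point at.

```python
def score_loan(app):
    # Hypothetical model: decides on credit score and income alone.
    return app["credit_score"] > 650 and app["income"] > 30_000

template = {"credit_score": 700, "income": 45_000}

# Two identical applications that differ only in a group label the model ignores.
outputs = {g: score_loan({**template, "group": g}) for g in ("A", "B")}
print(outputs)                              # {'A': True, 'B': True}
assert len(set(outputs.values())) == 1      # the flip test passes
```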
        • by swb ( 14022 )

          The credit scoring industry is always eager to find one more factor they can include in calculating credit risk and they seem fond of high-correlation variables unrelated to actual loan performance, like driving record. I'm mostly convinced this is just to find a way to charge a premium to good credit risks.

          But there is only so much money good credit risks will borrow (which is partly why they're good credit risks, it's a kind of self-selective behavior) and lenders would like to loan more money in order t

        • Well, in US labor law there's something called disparate impact. There is a grey area here and the ultimate answer will come from a social compromise, not from philosophy.

        • Re:More idiocy (Score:5, Informative)

          by Whorhay ( 1319089 ) on Tuesday December 19, 2017 @05:05PM (#55771377)

          I read an article about this kind of problem a while back; the algorithm being discussed was used by court systems to project the risk of a person becoming a repeat offender. A major problem with the system was that it was being used in ways that didn't match its intended use. But there were also real problems with the training data. Historic racism, for example, distorts crime statistics for as long as those statistics are viewed as relevant. Even today you have programs like 'Stop and Frisk' which perpetuate racist policing, and all the resulting prosecutions continue to weigh the statistics down.

          None of that should be surprising, and I'm not really against using algorithms to help make decisions. But those algorithms should not be black boxes, especially when they are used by government or institutions backed by government. And there should always be a route for an individual to obtain a breakdown of the algorithm's analysis pertaining to them, so that it can be contested when flawed.

        • by AmiMoJo ( 196126 )

          You don't need to know race to be racist. It can often be inferred from other things like address or occupation or name.

          It can also happen with feedback loops. A police chief has a limited budget and sees that a predominantly black area has a 5% higher crime rate, so decides to divert more resources there. Because there are more police, the crime detection rate goes up, and now there is 15% higher crime on paper, with more black people being arrested. The cops get the feeling that those people are more
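
          A toy version of that loop, with all numbers invented: area A's true rate is only 5% higher, but when next year's patrols chase this year's detections, the gap on paper keeps widening while the true rates never change.

```python
rate = {"A": 0.042, "B": 0.040}       # true crime rates: A is 5% higher
patrols = {"A": 50.0, "B": 50.0}      # 100 patrol units, split evenly

for year in range(1, 11):
    detected = {k: rate[k] * patrols[k] for k in patrols}
    total = sum(detected.values())
    # Budget follows detections, and detections follow patrols: a feedback loop.
    patrols = {k: 100 * d / total for k, d in detected.items()}
    print(year, {k: round(v, 1) for k, v in patrols.items()})

# After a decade the split is roughly 62/38: on paper, area A now looks far
# worse than a 5% difference, and arrests there climb accordingly.
```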

        • Please explain how an algorithm can be biased if you leave out ethnicity from the input data,

          That's simple: because replacing race in the input data with some proxy for the same information is not leaving that data out. Not quite the opposite, but close, if you can still derive race from what remains. The problem is that any valid data point could become such a proxy.

    • If it was, we wouldn't be having this conversation. Facebook can already guess your race, age and even sexuality based on the data they have about you, even if you didn't tell them any of that. America is more segregated today than it was in the 50s, and that's not by choice, it's by design. This is what people mean by 'institutionalized racism'. It means racism is carefully built into the institutions rather than enshrined in law.

      So Blacks can't get loans or buy decent houses. Gays can't take advantage o
    • Here's one response I would expect: the choice of what data used as inputs is itself discriminatory. Everyone knows that fewer or more [insert race here] people do [insert behavior here], and by you choosing that behavior as an input, you're automatically discriminating against that race.
    • By correlating other information, it's possible for a piece of software to be racist without using race as an input. You should give this a read:

      https://www.propublica.org/art... [propublica.org]

    • Disagree

      If someone feeds me a "chocolate" chip cookie made with dog shit; it's the *recipe* that *I* want held accountable!!!

    • You don't think that algorithms can use other proxies for race, age and gender? And that pattern identification algorithms aren't exceptionally good at finding those proxies?

    • Algorithms don't discriminate if you remove the kind of data (race, age, etc.) that would allow them to make categorizations or judgments based on that data.

      That's just it, they still do, and that's what pisses people off.

      You can train an algorithm, for example, to try to detect the likelihood of criminality (as is the case for sentencing recommendation tools). You can try to take race out of the data, but it's still going to be there. If you don't give it race, it will start using names. Remove the names, and it can weight things like lack of college or economic condition more. You can deny it that, and it will weight zip code more. Remove that from the training data, and it will find yet another proxy.

  • by Gramie2 ( 411713 ) on Tuesday December 19, 2017 @12:52PM (#55769499)

    A very good book that discusses the problems behind the blind implementation of algorithms is Weapons of Math Destruction [weaponsofm...onbook.com] by Cathy O’Neil.

  • wrong solution (Score:4, Interesting)

    by supernova87a ( 532540 ) <kepler1@NoSpaM.hotmail.com> on Tuesday December 19, 2017 @01:27PM (#55769717)
    Well, the issue I foresee in this effort is that the algorithms will be perfectly fine; it's the policies created to compensate for well-functioning algorithms that will be the problem.

    Because what policymakers will quickly find is that equal algorithmic treatment, or equal standards for all, does not lead to the outcomes they want, as people of different demographics, backgrounds, and capabilities do not take up services or succeed in programs in the same way.

    This is always the problem with policy: a tendency to believe (at least in recent liberal democracy) that people are all drawn from the same starting set and have equal propensities for doing / being / acting / achieving / using certain things. And when policymakers find that not to be true, democratic pressure forces them to find ways around the unavoidable truth and distort the outcomes.

    No algorithm will get around that.
    • by AmiMoJo ( 196126 )

      On the one hand it's wrong for liberals to demand acknowledgement of people's different circumstances. Everyone should be treated the same, for fairness.

      On the other hand, liberals think everyone is the same and behaves the same way, and try to enact policies based on this assumption. People are different and that must be acknowledged.

      Both of those things can't be true.

  • ... they will creat [opengroup.org] accountability algorithms.

  • ...fighting with an FFT-based algorithm that causes plenty of trouble. After reading TFA I started wondering if my algorithm does not work as expected just because it is discriminating against me. I will ask Mr. Vacca about...
  • If a model of, say, the likelihood of recidivism or the probability of loan default produces disparate results for different races, yet can be shown to be accurate in terms of ability to predict, is that discriminatory? I can see that happening with A.I. systems where the datasets are fed in and the black box then spits out results that, while accurate, are completely opaque as to how they are obtained. Is congruence with observed reality a defense against charges of racism?
    • >If a model of, say, the likelihood of recidivism or the probability of loan default results in disparate results for different races, yet can be shown to be accurate in terms of ability to predict, is that discriminatory?

      Yes. Because as an intelligent human, you know there are likely other factors at play for which racial identity is a weak proxy, and you should be using those factors rather than skin colour. So you can get a result that applies to that individual rather than a more or less arbitrary
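
      One way to make the question above concrete: "accurate" and "non-disparate" are different measurements, and a model can satisfy one while failing the other. The counts below are invented for illustration.

```python
# Per-group counts from a hypothetical risk model: how many people were
# flagged high-risk, and how many of those actually reoffended.
groups = {
    "A": {"total": 1000, "flagged": 400, "flagged_bad": 240},
    "B": {"total": 1000, "flagged": 150, "flagged_bad": 90},
}

for name, c in groups.items():
    precision = c["flagged_bad"] / c["flagged"]   # is the flag equally reliable?
    flag_rate = c["flagged"] / c["total"]         # how often is the group flagged?
    print(f"group {name}: precision {precision:.0%}, flag rate {flag_rate:.0%}")

# Both groups come out at 60% precision, so the flag is equally "accurate"
# for each. But group A is flagged 40% of the time vs. 15% for B. Whether
# that gap is discriminatory is a policy question accuracy can't answer.
```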

    • I think that racism, sexism, other-ism... needs to be understood as a social compromise. Life isn't fair, but artificial institutions should be as close to fair as practical to promote happiness. It's undeniable that men are on average stronger and faster than women. So, it makes sense to have separate categories for women in sporting events. This isn't to say that we need to give everyone a chance to feel special, but the fact is that women are a huge class of people with unique powers. With that being sai

  • AI doesn't know to "look the other way." When people are in charge, they can take a subtle hint (give more to this group, don't mention this group if they commit a crime, etc.). We just need to inject a Social Justice Warrior loop into these AIs.
