New York City Moves To Create Accountability For Algorithms (propublica.org) 183
The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. From a report: Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination -- the first measure of its kind in the country. The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers' lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation or citizenship status. The task force's report will also explore how to make these decision-making processes understandable to the public. The bill's sponsor, Council Member James Vacca, said he was inspired by ProPublica's investigation into racially biased algorithms used to assess the criminal risk of defendants. "My ambition here is transparency, as well as accountability," Vacca said.
Mirror (Score:1)
It's often these same people who impose systematic discrimination based on these exact criteria.
Re: (Score:3, Insightful)
So, facts....if inconvenient....are not to be used or trusted?
Hmm...isn't that kinda defeating the purpose?
Re:Mirror (Score:5, Insightful)
Right.
Because apparently, in 2017, math became racist.
Re: (Score:2)
I'm white and most of those don't apply to me (although I do technically live in a suburb), so your assertion that such criteria are inherently racist rings false.
Re: (Score:2)
In its majestic equality, the law forbids rich and poor alike to sleep under bridges, beg in the streets and steal loaves of bread.
Re: (Score:2)
non sequitur.
Re: Mirror (Score:2)
An algorithm is torturing me as we speak; they are worth a closer look. But it's all about problems not intended by the algorithm designers, so where liability lies is confusing. E.g., a racially green programmer favors education for hiring, but education is correlated with money, and legacy racism made green families lower-income than blue families, so the algorithm picks blue people and overlooks green people.
Re: (Score:3)
That shouldn't really matter. If they are looking for educated people and more blue people are educated than green, then the organization shouldn't have to hire less qualified people in the name of political correctness.
Almost comically, these types of things also come from absolute hypocrites.
If I say "Green people are less educated." I'm attacked for propagating a stereotype, yet the same people levying those attacks will say "You can't hire based on education because green people can't compete."
Re: (Score:2)
Not necessarily. Depends on whom you accuse of being "them".
Re: (Score:2)
But when you u
Re: Mirror (Score:3)
It may be true that people of whatever race are statistically more likely to be involved in crime, but it's not OK to deny an individual a loan on that basis, for instance.
Where does that stop though? Statistically men die earlier than women; is it wrong to charge men more for life insurance? Statistically women cost the medical system more than men; is it wrong to charge women more for health insurance? Statistically men are more likely to be involved in a traffic accident; is it wrong to charge men more for car insurance?
Re: (Score:2)
You appear to be using "correlation" to mean "causation".
A correlation can't be direct or indirect; it's a mathematical fact reflecting how changes in one variable correspond to changes in another. It says nothing about how or why.
Re: (Score:2)
It depends whether you are looking at correlation or causation. Say, for example, that men not only die earlier than women, they are also more likely to smoke than women. In this case, smoking is the cause of early death, not being a man. So the answer there is to charge smokers more than non-smokers.
Re: (Score:3)
But, if you're going for purely predictive results....what part does "causation" play in this at all?
Re: (Score:2)
What's a bonafide trend? How do you distinguish it from correctly identifying racism/sexism in the training data?
Re: (Score:2)
I'm not an AI expert, far from it....
But I would have to imagine that you could at least start with training data that did NOT list race/sex categories and then just turn it loose and see what it finds on its own?
And look, there ARE differences between the sexes and the races in things. I'm sure if you never told an AI about race and it studied the NBA vs all other careers...you'd find a lot of trends t
Re: (Score:2)
I'm assuming this is a legitimate question. It's hard to tell because it's a fairly similar argument to what trolls use.
Basically, it seems unambiguous that racism (and sexism, but I'll limit myself to racism) existed in the past. The data fed into the AI will take into account racist decisions by humans. As a plausible example that we can pretend is true for this conversation, black people were given worse mortgage terms that led to more defaults. Therefore, the AI interprets black people as having less
Re: (Score:2)
Well, let's say that you start training the AI with more recent data....and if that still shows, without using race as a factor, that black people are less likely to pay back loans and more likely to default on mortgages, negating the factors of the past, is it still wrong to do so?
What if for the sake of argument (not saying it is true), that black people in general by virtue of data analysis, are more likely to be a credit/loan risk, is that still not basis to see it as a trend that should be considere
Re: (Score:2)
But I would have to imagine that you could at least start with training data that did NOT list race/sex categories and then just turn it loose and see what it finds on its own?
Check out the story behind that whole story about that racist algorithm that decided on jail sentences.
That is what they did. But you don't need sex/race/age data when, e.g., income/education/prior-convictions data lets you derive sex/race with 99.9% accuracy.
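The proxy effect described here can be sketched with toy data (the groups, zip codes, and incomes below are all invented, and the zip/group mapping is deliberately idealized to make the point obvious):

```python
import random

random.seed(0)

# Hypothetical records: historical segregation means group membership
# determines zip code, so zip acts as a near-perfect proxy for group.
records = []
for _ in range(1000):
    group = random.choice(["green", "blue"])
    zip_code = "10001" if group == "blue" else "10456"  # idealized segregation
    income = random.gauss(80 if group == "blue" else 45, 10)
    records.append({"zip": zip_code, "income": income, "group": group})

# A "blind" rule that never sees `group`, only the zip code...
def infer_group(rec):
    return "blue" if rec["zip"] == "10001" else "green"

# ...still recovers the protected attribute perfectly.
accuracy = sum(infer_group(r) == r["group"] for r in records) / len(records)
print(accuracy)  # 1.0
```

In real data the mapping is noisier, but the mechanism is the same: dropping the race column does nothing if the remaining columns jointly encode it.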
Re: (Score:2)
This is exactly the line of thought that Sweden followed to cover up the negative effects of mass migration. So, good luck with this, New York.
Re: (Score:2)
Which Sweden? Has to be that Sweden where Trump saw something happening last night and not the northern European country...
At least you can examine an algorithm (Score:3)
Re: (Score:2)
AIs do not have legal accountability. People have legal accountability, no matter what tools they use. Illegal discrimination conducted by scientific-sounding means is still illegal discrimination.
Re: (Score:2)
...even the designers of these AIs can't tell you what they're really doing under the hood...
What a garbage myth. It's simply untrue.
The designers know what the AIs are doing, it's just never been a priority to make such explanations easily accessible. It is now, and so they're doing it; you're badly misinformed if you think the designers of these systems are clueless.
Re: At least you can examine an algorithm (Score:2)
If Microsoft's "Tay" is any indication, teaching AI not to discriminate isn't going to be easy. She started off as a cheerful innocent teenager, and in under 24 hours the internet turned her into a raging Nazi.
Re: (Score:2)
...in under 24 hours the internet turned her into a raging Nazi...
This is just GIGO though. The internet is like 90% garbage content and trolling assholes - WTF did they expect using that as training data?!?!?!
Re: (Score:2)
If someone tells you they know how a neural network makes its decisions, they are lying to you.
Nope, you have been misinformed, even I can tell you that!
Neural networks make their decisions by using gradient descent to segment an N-dimensional hyperspace with N-1 dimensional hyperplanes.
Researchers who know how ANNs work have known this for a long time, and can extract more "human readable" explanations from that understanding - it's just that there's never been an impetus to do so before. There is now, so we see researchers actually providing said info now. For example:
NVidia's ANN for self-drivin [nvidia.com]
Re: (Score:2)
Nope, you have been misinformed, even I can tell you that!
Neural networks make their decisions by using gradient descent to segment an N-dimensional hyperspace with N-1 dimensional hyperplanes.
Well, that is how EVERY neural network makes its decisions.
But you don't know why your trained network weighs that input to neuron #5 in layer 3 with 0.7 instead of 0.5. And you cannot predict whether changing it manually by 0.1 will make your results slightly worse or screw them up completely. You only know that they WILL become worse, as your training algorithm has already found a local minimum.
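A minimal sketch of that point, using a hand-wired two-layer network (the weights are invented for illustration, not trained): you can compute exactly what the network does, yet the only way to learn what nudging one weight by 0.1 does to the output is to run the forward pass again:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    # Standard feed-forward pass: weighted sums squashed by sigmoids
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Invented "trained" weights: 2 inputs -> 2 hidden units -> 1 output
w_hidden = [[0.9, -0.4], [0.3, 0.7]]
w_out = [0.7, 0.5]
x = [1.0, 2.0]

base = forward(x, w_hidden, w_out)

# Manually nudge one hidden weight by 0.1, as in the comment above.
w_hidden[0][0] += 0.1
nudged = forward(x, w_hidden, w_out)

# The output moves, but nothing in the weight itself tells you why or
# by how much; you find out only by re-running the network.
print(base != nudged)  # True
```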
Re: (Score:2)
I was in a presentation on the ethics of AI at a very prominent computer science conference. Several experts stood up during the presentation to make this very point. Neural networks, what people call AI these days, /are/ black boxes and we know no more about how they make decisions than we do about how the brain makes decisions based on the network of synapses that have been trained by inputs from the time of our birth.
Thank you for making this point!
Will
Re: (Score:2)
The algorithms themselves are actually the least important aspect. As I have said before [slashdot.org], even if the algorithms are 100% open and transparent, that means nothing if the data fed into them is poor. If the bank uses an algorithm to determine whether it wants to lend money to you, how is the data about you collected? Who decided to classify you as, say, a medium-risk person? What criteria did he/she/they use for that? How thorough were he/she/they in gathering decision material? What did he/she/they miss/ignore/mis
If an algorithm does not have race/gender input (Score:1)
Can people figure out how it discriminates against certain race or gender?
Easy (Score:1)
Re: (Score:1)
I thought IQ discriminates mostly against blacks who tend to vote liberal. In fact, we can't even use IQ tests for things like employment anymore because the blacks just can't compete.
IQ tests. Perfectly valid if they put conservatives in a bad light. Discriminatory, racist, pseudoscience, useless, completely subjective, etc. when they tell us things we'd rather not hear.
But whatever suits your narrative.
So, let's study the problem and see if an effect (Score:2)
Can people figure out how it discriminates against certain race or gender?
The proposal here is to do a study to understand that, yes.
You did notice that this article was about studying the problem to see if there is algorithmic discrimination, right?
However, since the example discussed in the text was about DNA testing, I would point out that race and gender are encoded in DNA, so "does not have race/gender input" is not applicable here.
In other cases, however, yes, it turns out that there can be race and gender encoded into input data even if it is
Re: (Score:2)
You can look at the outputs. It shouldn't be that hard.
You can also look at whether any of the inputs are proxies for race/gender.
More idiocy (Score:5, Insightful)
This kind of idiotic approach is just ignoring the actual underlying problems or differences in favor of trying to slap a band-aid on top of it to assuage guilty feelings. Worse yet, it prevents confronting the actual issues head on and is doomed to failure.
Re: (Score:2, Insightful)
If you're dealing with medicine, noting ethnic differences is important. Doctors understand probabilities and knowing when certain probabilities are elevated can significantly alter diagnostics and treatment to the benefit of the patient.
Unfortunately, when you're dealing with most other things... you get discrimination. Maybe - to use your example - your algorithm thinks African Americans are a worse lending risk. That's a problem all on its own because that result will be used not to be more cautious
Re: (Score:2, Insightful)
The more we learn about science, the more we are going to want to bury our head in the sand and ignore it.
Yes, in medicine there are statistical differences between the races.
What if, just maybe, beyond skin color, there are genetic differences between the races in how people value life, truth, and their propensity to violence. This is where people want to bury their heads in the sand. It's time we get honest and accept these truths.
Re: (Score:2, Flamebait)
What if, just maybe, beyond skin color, there are genetic differences between the races in how people value life, truth, and their propensity to violence. This is where people want to bury their heads in the sand. It's time we get honest and accept these truths.
Oh, bullshit. There's nothing "head burying" about wanting to treat people fairly. Even assuming the racial differences you posit exist (assuming race actually exists as a coherent and well-defined thing, which is debatable [1]), the racial differences are utterly swamped by individual differences, so it makes no sense whatsoever to make assumptions about individuals based on racial characteristics. Supposing, to take one example, African Americans score lower on IQ tests because they're not as smart, on av
Re: (Score:2)
" the racial differences are utterly swamped by individual differences, so it makes no sense whatsoever to make assumptions about individuals based on racial characteristics."
That's an oft-quoted and rather misleading retort to observations of population differences.
As an example, suppose we have two populations - A and B. Population A has a mean height of 5.5 feet, with a standard deviation of 6 inches. Population B has a mean height of 5 feet, with a standard deviation of 4 inches. In this hypothetical
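Plugging these hypothetical numbers into the normal tail formula shows how hard the effect bites far from the mean (heights in feet; 6 in = 0.5 ft, 4 in ≈ 0.333 ft):

```python
import math

def tail_above(threshold, mean, sd):
    # P(X > threshold) for a normal distribution, via the complementary
    # error function
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Population A: mean 5.5 ft, sd 0.5 ft; Population B: mean 5.0 ft, sd 1/3 ft
p_a = tail_above(6.0, 5.5, 0.5)
p_b = tail_above(6.0, 5.0, 1 / 3)

print(p_a)        # ≈ 0.159: roughly 16% of A stand over six feet
print(p_b)        # ≈ 0.0013: roughly 0.13% of B do
print(p_a / p_b)  # over 100x difference, from modest mean/sd gaps
```

A half-foot difference in means (plus a slightly smaller spread) becomes a two-orders-of-magnitude difference at the six-foot tail, which is the point about selection far from the mean.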
Re: (Score:2)
the best basketball players are 7 feet tall or taller
This is a good proxy. Why ?
1. Height actually helps basketball playing. Presumably because game designers chose to keep the baskets high - though the advantages by height in acquiring control of ball cannot be denied.
2. Many people measuring one person's height will come to the same conclusions. So height means something - even though it varies in evening, morning, while thirsty etc.
If 85 is around the point at which people are economically productive, and 130 is the point at which
This is a bad proxy. Why ?
1. Measured IQ does not help in being economically productive, intelligence might.
2. There are vari
Re: (Score:2)
Meh. It's a rather obvious (to anyone who's studied statistics) fact that small differences in means of normally-distributed populations create large differences in the proportion of populations far from the mean. Small differences in variance do the same.
But that's only relevant when you're looking for people who are far from the mean. In professional basketball, you're looking for people who are many standard deviations from the mean in basketball ability (assuming the NBA really has the best 500 players
Re: (Score:2)
Where the hell did you get individual probabilities from?
Just because the average sick leave at Company X is y days per year, you cannot in any credible way predict that Mr Smith, working for Company X, will have exactly y sick days next year. If you believe that you can extrapolate probabilities for an individual out of said person's group memberships (be it employment, sex or race) you have a serious misunderstanding of how statistics work.
Re:More idiocy (Score:4, Insightful)
If you're dealing with medicine, noting ethnic differences is important. Doctors understand probabilities and knowing when certain probabilities are elevated can significantly alter diagnostics and treatment to the benefit of the patient.
And yet, I guarantee some non-doctors out there will claim it's racist to only test black people for sickle-cell anemia. This is why we can't have nice things - we allow the ignorant people to have an equal voice to the knowledgeable.
Re: (Score:3)
It is racist to only test black people for sickle cell. The condition is common to areas outside Africa where malaria is still prevalent, you know.
Re: (Score:2)
OK, so maybe that was a bad example, but the point remains valid - genetics, including the genes that manage racial features, are important to consider in medical practice.
Re:More idiocy (Score:5, Interesting)
You can even prove it's not racist by finding a set of input data for individuals from two different demographic groups and seeing if it returns the same results for both. My guess is that it gives loans to black people who have good credit scores, a stable income, etc. and denies them to white people who have poor credit history and no steady income.
Algorithms are going to be far better than humans because they don't care about black, gay, atheist, etc. A human might well be intellectually lazy enough to group all blacks together as poor credit risks, but an algorithm isn't if you leave that irrelevant data out. In fact, using these algorithms would mean that if there is widespread discrimination against a group, that the company using the algorithm can actively pick out the people who will be able to repay loans which will generate additional profit. They've given themselves customers that other people are denying.
This doesn't look like being careful or taking preventative measures against misuse. Instead it reeks of not liking the results and not caring to address the underlying causes of those results. Giving loans to bad lending risks isn't going to magically make them responsible or more likely to pay back their loans. If black people, Methodists, or white people from WV happen to fall into that category more often than other groups, then you need to actually look at what is contributing to that result if you actually want to fix the problem.
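The "same inputs, different group" probe suggested above can be sketched like this (`loan_decision` is a made-up stand-in for a real model, and the fields are hypothetical):

```python
def loan_decision(applicant):
    # Made-up stand-in for a real model: uses only financial signals
    return applicant["credit_score"] >= 650 and applicant["steady_income"]

applicant = {"credit_score": 700, "steady_income": True}

# Probe: identical financial inputs, only the demographic label flipped
decisions = [loan_decision(dict(applicant, group=g))
             for g in ("black", "white")]

print(decisions)  # [True, True] -> the label alone changes nothing here
```

Note the caveat from elsewhere in the thread: this probe only catches direct use of the label, not proxies baked into the other fields.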
Re: (Score:3)
The credit scoring industry is always eager to find one more factor they can include in calculating credit risk and they seem fond of high-correlation variables unrelated to actual loan performance, like driving record. I'm mostly convinced this is just to find a way to charge a premium to good credit risks.
But there is only so much money good credit risks will borrow (which is partly why they're good credit risks, it's a kind of self-selective behavior) and lenders would like to loan more money in order t
Re: (Score:3)
Well, in US labor law there's something called disparate impact. There is a grey area here and the ultimate answer will come from a social compromise, not from philosophy.
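One common way disparate impact gets operationalized is the EEOC's "four-fifths rule": a protected group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. A sketch with hypothetical numbers:

```python
# Hypothetical selection outcomes per group (counts are invented)
selected = {"group_a": 48, "group_b": 30}
applicants = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applicants[g] for g in selected}

# Four-fifths rule: compare the lowest selection rate to the highest
ratio = min(rates.values()) / max(rates.values())
print(ratio)        # ≈ 0.625
print(ratio < 0.8)  # True -> flags potential adverse impact
```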
Re:More idiocy (Score:5, Informative)
I read an article about this kind of problem a while back, only the algorithm being discussed was used by court systems to project the risk of a person becoming a repeat offender. A major problem with the system was that it was being used in ways that didn't match its intended use. But there were also real problems with the training data that was used. Historic racism for example distorts crime statistics for as long as they are viewed as relevant. Even today you have programs like 'Stop and Frisk' which perpetuate racist policing and all the resulting prosecutions from that continue to weigh the statistics down.
None of that should be surprising, and I'm not really against using algorithms for helping to make decisions. But those algorithms should not be black boxes, especially whenever they are used by government or institutions backed by government. And there should always be a route for an individual to obtain a breakdown of the algorithm's analysis pertaining to them so that it can be contested when flawed.
Re: (Score:2)
You don't need to know race to be racist. It can often be inferred from other things like address or occupation or name.
It can also happen with feedback loops. Chief of police has a limited budget and sees that a predominantly black area has a 5% higher crime rate, so decides to divert more resources there. Because there are more police, the crime detection rate goes up, and now there is 15% higher crime on paper, with more black people being arrested. The cops get the feeling that those people are more
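The feedback loop described here can be simulated with a toy model (all numbers invented; both areas have the same true crime rate throughout):

```python
true_rate = 100                      # identical real offenses in each area
patrols = {"area_1": 10.0, "area_2": 10.0}

for period in range(5):
    # Recorded crime scales with patrol presence (capped at reality)
    recorded = {a: min(true_rate, true_rate * p / 50)
                for a, p in patrols.items()}
    if period == 0:
        recorded["area_1"] += 5      # a one-off blip in area_1's paperwork
    # Chief reallocates a fixed 20 patrols proportionally to recorded crime
    total = sum(recorded.values())
    patrols = {a: 20 * r / total for a, r in recorded.items()}

print(patrols)  # area_1 keeps extra patrols ever after, despite equal crime
```

A single blip in the records permanently skews the allocation, because recorded crime and patrol presence now justify each other.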
Re: (Score:2)
Please explain how an algorithm can be biased if you leave out ethnicity from the input data,
That's simple. Because replacing race in the input data with some proxy for the same information is not leaving that data out; quite the opposite, if you can still derive race from it. The problem is that any valid data point could become such a proxy.
Re: (Score:2)
I like how I literally answered the question (how an algorithm can be effectively biased even if it's not obvious from the inputs, and without taking any sides on whether New York is doing a good or bad thing), and got modded down to Troll.
It's not that simple (Score:2)
So Blacks can't get loans or buy decent houses. Gays can't take advantage o
Re: (Score:3)
By correlating other information, it's possible for a piece of software to be racist without using race as an input. You should give this a read:
https://www.propublica.org/art... [propublica.org]
Re: (Score:2)
Disagree
If someone feeds me a "chocolate" chip cookie made with dog shit; it's the *recipe* that *I* want held accountable!!!
Re: (Score:2)
You don't think that algorithms can use other proxies for race, age and gender? And that pattern identification algorithms aren't exceptionally good at finding those proxies?
Re: (Score:2)
Algorithms don't discriminate if you remove the kind of data (race, age, etc.) that would allow them to make categorizations or judgments based on that data.
That's just it, they still do, and that's what pisses people off.
You can train an algorithm for example to try to detect the likelihood of criminality (as is the case for sentencing recommendation tools). You can try to take race out of the data, but it's still going to be there. If you don't give it race, it will start using names. Remove the names, and it can weight things like lack of college or economic condition more. You can deny it that, and it will weight zip code more. Remove that from the tra
"Weapons of Math Destruction" (Score:4, Informative)
A very good book that discusses the problems behind the blind implementation of algorithms is Weapons of Math Destruction [weaponsofm...onbook.com] by Cathy O’Neil.
wrong solution (Score:4, Interesting)
Because what policymakers will quickly find is that equal algorithmic treatment, or equal standards for all, does not lead to the outcomes they want, as people of different demographics, backgrounds, and capabilities do not take up services or succeed in programs in the same way.
This is the problem with policy always -- a tendency to believe (at least in recent liberal democracy) that people are all drawn from the same starting set and have equal propensities for doing / being / acting / achieving / using certain things. And when policymakers find that to be the unavoidable truth, democratic pressure forces them to find ways around this truth and distort the outcomes.
No algorithm will get around that.
Re: (Score:2)
On the one hand it's wrong for liberals to demand acknowledgement of people's different circumstances. Everyone should be treated the same, for fairness.
On the other hand, liberals think everyone is the same and behaves the same way, and try to enact policies based on this assumption. People are different and that must be acknowledged.
Both of those things can't be true.
And for POSIX systems ... (Score:2)
I spent the whole day... (Score:2)
What if the algorithm is provably right? (Score:2)
Re: (Score:2)
>If a model of, say, the likelihood of recidivism or the probability of loan default results in disparate results for different races, yet can be shown to be accurate in terms of ability to predict, is that discriminatory?
Yes. Because as an intelligent human, you know there are likely other factors at play for which racial identity is a weak proxy, and you should be using those factors rather than skin colour. So you can get a result that applies to that individual rather than a more or less arbitrary
Re: (Score:2)
I think that racism, sexism, other-ism... needs to be understood as a social compromise. Life isn't fair, but artificial institutions should be as close to fair as practical to promote happiness. It's undeniable that men are on average stronger and faster than women. So, it makes sense to have separate categories for women in sporting events. This isn't to say that we need to give everyone a chance to feel special, but the fact is that women are a huge class of people with unique powers. With that being sai
Maybe pre-AI was racist? (Score:2)
Re:Now hold Trump accountable for TREASON (Score:4, Insightful)
Republican!
These people are obviously just fakes, making Democrats look unhinged.
But it's believable because the Ds _have_ lost control of their loonies. Unless the Ds check their lunatic fringe, Trump is good for two terms.
Re: (Score:2)
Like I say, the Ds have lost control of their lunatic fringe. Which is what makes this post plausible.
As the lunatic Ds run out of energy, Rs will fill the gap. Just as Ds pretend to be Klansmen to push their agenda, Rs pretend to be neo-Stalinists/Antifa. Which doesn't mean that real Klansmen and commies don't exist, just that they aren't THAT stupid...granting some are, like you say, Waters.
Re: (Score:1)
The appropriate phrase is "adhering to their Enemies, giving them Aid and Comfort". You seem to be under the impression that enemies only exist in wartime, and I don't know that that's the case.
Re: (Score:2)
I'm not sure they see it the same way.
Re: (Score:2)
They see everyone as their enemy, and most of the time they're seeing double.
Re: (Score:2)
The US sent troops to Vladivostok, IIRC, during the Russian Civil War. It had at least a small hand in the first collapse (arguably, two collapses in a year).
Re: (Score:2)
While what you said is true, when we're talking about the feelings of the Russian people we need to know how they perceive things. At least at one time, they mostly believed that the US invaded their country during their civil war.
On a side note, it's interesting to look at the role of Germany in making the Soviet Union the post-WWII threat it was. No other country was anywhere near as useful in helping the Soviet Communists.
Re: (Score:2)
Would you have been saying the same thing if it had been China or India, not Russia, whose intelligence agency aided Trump?
Re: (Score:2, Insightful)
Yeah, it was outrageous when Trump was caught on an open mike promising Medvedev 'more flexibility' after the election. Collusion and treason!
Oh wait, that was Obama [washingtonpost.com]
Re: Now hold Trump accountable for TREASON (Score:1)
Re: (Score:1, Offtopic)
Hmm....I guess you're right, as there had not been a single mass shooting to date prior to the Vegas shooting that seemed to involve bump stocks.
Hmm...I guess we'd better ban fingers and belt loops and sticks that can emulate that bump stock too.....
Strange, we'd not heard of many crimes involving the bump stocks prior to this, ev
Re: (Score:2)
He was using AR15s, so I'm not sure why he didn't just buy a 3MR trigger. Sure, he'd fire a tad slower, but he'd fire a lot more accurately.
Re: (Score:3)
Oh, I agree.
Thing is...a bump fire stock, by nature of how it works....isn't really that great or reliable if you are trying to move around with your gun.
If you are set up in a sniper area like he was, with multiple weapons fitted with them, to allow cooling and not having to reload as often and being somewhat able to stand stationary while using them, then they are
Re: (Score:2, Offtopic)
There are things Trump can legally do for his Russian buddies, now that he's President, that he couldn't do before. He seems to have colluded with them before.
Citation needed.
Re: (Score:2)
To be more specific, my available evidence (which is limited) suggests to me that Trump colluded with Russia. Part of this is meta-evidence, including people lying when they should have had no need to, which a court can't consider but I can.
Re: (Score:2)
I don't know about him, but my portfolio (one small-cap growth ETF, one large-cap growth ETF, one eurozone large-cap growth ETF, one high dividend ETF, and a few stocks) gained 35%-40% over the year 2017.
Re: (Score:2)
Yeah - mine too. I'm kicking myself for pulling a bunch of funds out of my IRA