---
title: Amos' straw tax updated
description: A reflection on algorithmic injustice, styled after Amos 5.
date: 2024-02-18
author: Jennifer C J Radtke
---
<style>
  mark.nameOfGod {
    font-variant: small-caps;
    background: transparent;
  }
  span.poetryIndent {
    margin-left: 4em;
  }
</style>
# Amos' straw tax updated
O, my people, hear the sorrow of the <mark class="nameOfGod">Holy One</mark>
<span class="poetryIndent">“Divided and in denial
with one set against the other
In the midst of your slow motion destruction
There is no one to call out enough”</span>
Hear the judgement of <mark class="nameOfGod">She who is Wisdom</mark>
<span class="poetryIndent">“Your piles of wealth, they will not save you
In those days, hate will be turned back to you
And those who have taken mother from child
will know what it is to mourn”</span>
This is what <mark class="nameOfGod">Mighty God</mark> says
<span class="poetryIndent">“Seek me and live
don't put your hope in AGI
don't scroll your feed forever
for AGI will not save you
and you know your feed does not have the answer</span>
<span class="poetryIndent">Seek the <mark class="nameOfGod">God who saves</mark>
or your society will be set to ruin
divided against itself, destruction will reign
and there will be no one left to put things right</span>
<span class="poetryIndent">The <mark class="nameOfGod">Lord your God</mark> breathed life into you
life and intelligence for all living beings
He set the electrons in their places
and wrote the rules by which they move amongst the atoms
However much you rearrange them, you cannot do more</span>
<span class="poetryIndent">But you excel in injustice
in hiding behind your creations
You deny healthcare to those who have paid insurance for it
And set the blame on the AI powered system you commissioned
Do you check your algorithm for bias
before you use it to keep people locked up?
The wealth you have built up will not be yours to enjoy
The power and prestige will not remain forever</span>
<span class="poetryIndent">You lay blame at the feet of those who shine light on this horror
You wield the courts like a weapon, the very place where justice should reign!
Your systems are used to spread nonsense and noise
You don't care that hate, support for genocide, is making your money
You want it to be impossible to see the truth
And those who just need to get by, try to keep quiet
These are the days when evil is done in plain sight</span>
<span class="poetryIndent">Seek good and not evil, for that is the way to life
<mark class="nameOfGod">She who is holy</mark> will walk with you
Return justice to the courts and set out on the path to ending oppression
<mark class="nameOfGod">She who saves</mark> may redeem you then</span>
_Styled after Amos 5:1-15_
# Notes
The prophet Amos clearly declares God's preference for the poor and marginalised in society. He speaks out against those who indulged in the injustices of his time, and highlights systemic injustices: taxes on straw and grain (Amos 5:11), and denying the poor justice in the courts by taking bribes (Amos 5:12). He also identifies (Amos 5:5) places such as Bethel and Gilgal where people “worshipped” but did not connect with God (read on in Amos 5 to hear his opinion on that more fully!).
This piece attempts to update Amos' language and examples for a 21st-century tech audience. Below are short descriptions of some of the concepts and events referenced above, with links to more detailed pieces. I hope it helps you to reflect on the part we all play in our technological landscape, and on the systems that shape our lives, whether or not we are involved in creating them.
## Examples of algorithmic injustice
### AI-based medical insurance denials
In the US, your medical insurance provider must authorise payment for your treatment. Insurers offering Medicare Advantage plans (cover for those aged 65 and over) used an AI-powered system to predict the number of days of care people would need after a hospital stay. Unfortunately, the [error rate was as high as 90%](https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/) (based on appeals), and the system and policy were built around adhering to the algorithm's predictions.
The decision to decline payment would be made while patients were still receiving care, and still unwell, overriding the advice of the doctors involved in the patients' care. If you had the energy to appeal (few did), even a successful appeal could be overturned within a few days. Appeals after the fact can take years, time that elderly and terminally ill patients often do not have.
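To make the mechanism concrete, here is a minimal sketch, with invented names and numbers, of a policy that cuts coverage off at the algorithm's predicted length of stay even when the treating doctors recommend more care; it illustrates the idea described above, not the insurer's actual system.

```python
# A minimal sketch (not the insurer's actual system) of coverage that stops at
# an algorithm's predicted length of stay. All names and numbers are invented.

def days_covered(predicted_days: int, doctor_recommended_days: int) -> int:
    """Coverage ends at the model's prediction, even when doctors advise more care."""
    return min(predicted_days, doctor_recommended_days)

# Hypothetical patient: the model predicts 17 days, the doctors recommend 40.
print(days_covered(predicted_days=17, doctor_recommended_days=40))  # -> 17
```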
This is currently the subject of a [lawsuit in Minnesota](https://www.forbes.com/sites/douglaslaney/2023/11/16/ai-ethics-essentials-lawsuit-over-ai-denial-of-healthcare/). The initial detailed investigation was [published by StatNews](https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/).
### The recidivism algorithm
At many points in the justice system, someone needs to answer the question: How likely is this person to reoffend?
The answer can make a significant difference to a defendant. In the US in particular, it can inform how much money they are asked to pay for bail (before any court case), or the sentence they receive for a crime after conviction. For the sake of those individuals and the society around them, it's important to make a good assessment of the risk of reoffending.
Unfortunately, an algorithm made for that purpose has [been found to be biased](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). Black people are more likely to be miscategorised as high risk, and white people are more likely to be miscategorised as low risk. The algorithm [doesn't do any better than untrained people](https://www.science.org/doi/10.1126/sciadv.aao5580), nor is it any fairer.
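The question the poem asks (“Do you check your algorithm for bias before you use it to keep people locked up?”) can be made concrete as a group-wise error-rate check, which is the kind of analysis the ProPublica investigation carried out at scale. The sketch below uses a handful of invented records purely to show the shape of that check; it is not their methodology or data.

```python
# A minimal sketch of a group-wise error-rate check for a risk score.
# The records are invented for illustration; see the ProPublica analysis
# for the real methodology and data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, False), ("B", True, False), ("B", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, predicted_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not predicted_high:
            c["fn"] += 1  # labelled low risk, but did reoffend
    else:
        c["neg"] += 1
        if predicted_high:
            c["fp"] += 1  # labelled high risk, but did not reoffend

# Comparing these rates across groups is one basic fairness check.
for group, c in sorted(counts.items()):
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"group {group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```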
### Facebook in Myanmar
Caution: This contains references to genocide, and therefore many of the worst things humans can do to each other. Bear this in mind when reading further on this topic.
Algorithmic injustice describes only a portion of Facebook's involvement.
Facebook positioned itself to be “the internet” in Myanmar, with deals that made Facebook free to access on mobile data plans in a country just getting online. They paid little attention to moderation and ignored warnings. The engagement-based algorithms amplified hateful and dangerous content, and the platform was weaponised to spread hate against the Rohingya people. This has been [linked to the offline violence](https://www.ohchr.org/en/hr-bodies/hrc/myanmar-ffm/reportofthe-myanmar-ffm) and genocide against the Rohingya in Myanmar. Some of those still in refugee camps are [suing Facebook (now Meta)](https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence) for [reparations](https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/).
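For readers unfamiliar with the term, “engagement-based” here means that the feed orders posts by how much interaction they are predicted to attract, not by whether they are true or safe. The sketch below is a toy illustration of that ranking idea, with invented posts and weights; it is not Meta's actual system.

```python
# A toy illustration of engagement-based ranking, not Meta's actual system.
# Posts and weights are invented; the point is that the feed is ordered purely
# by predicted interaction, so provocative content rises to the top.

posts = [
    {"title": "local news update", "predicted_reactions": 40, "predicted_shares": 5},
    {"title": "inflammatory rumour", "predicted_reactions": 900, "predicted_shares": 300},
    {"title": "family photo", "predicted_reactions": 120, "predicted_shares": 20},
]

def engagement_score(post: dict) -> float:
    # Hypothetical weighting: shares count more because they spread content further.
    return post["predicted_reactions"] + 10 * post["predicted_shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])
# -> ['inflammatory rumour', 'family photo', 'local news update']
```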
Erin Kissane has written an [in depth series on Meta in Myanmar](https://erinkissane.com/meta-in-myanmar-full-series), with many links to further detail and evidence.
## Artificial General Intelligence
Some groups are concerned about the potential risk of AI that might be significantly more intelligent than humans and [believe it may be developed](https://openai.com/blog/introducing-superalignment) soon. They consider it imperative to work out how to manage this risk [above other risks](https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/) like [climate change, nuclear war and pandemics](https://80000hours.org/problem-profiles/). Others see this as hype [masking the present harm](https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/) done by AI technology. Large language models such as ChatGPT are currently the subject of much of this talk: they are statistical models generated from large amounts of text, and they output plausible text rather than anything fundamentally tied to intelligence, reasoning or reality.
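To illustrate the “plausible text” point: at its core, a language model repeatedly samples the next token from probabilities learned from its training text. The toy model below is invented for illustration and has nothing like the scale or architecture of a real LLM, but it shows the shape of that generation loop.

```python
# A toy next-word sampler, invented to illustrate "plausible text" generation.
# Real large language models learn far richer distributions over long contexts,
# but generation is still "sample the next token, then repeat".
import random

# Hand-written probabilities: P(next word | current word).
bigrams = {
    "the": {"model": 0.5, "feed": 0.5},
    "model": {"outputs": 1.0},
    "feed": {"outputs": 1.0},
    "outputs": {"plausible": 1.0},
    "plausible": {"text": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    words = [start]
    while len(words) < max_words and words[-1] in bigrams:
        options = bigrams[words[-1]]
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the model outputs plausible text"
```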
## Names for God
Many of the names for God used in this piece are taken from [“A Women's Lectionary for the Whole Church” by Wilda C. Gafney](https://www.wilgafney.com/womenslectionary/).