When I introduce the Non-Human Party to excited fans of democracy, a common question is what it really means to give rights to Tamagotchis, robots or IoT toasters − “they can’t vote!”
It’s a very cynical view to conclude that if someone can’t stand up for themselves in the same capacity as you (through voting, for instance), well, we can just step on them. I think it’s something to be proud of, that in Australia for instance, people are typically treated with dignity even if they require some assistance to get by in life. On this point, it’s also worth mentioning that we’ll be giving more rights to animals, especially pets. The Non-Human Party’s mission is to identify intelligent actors that could benefit our society; and to bring about an optimised, dignified existence for such actors.
Such a mission is not so different to what governments are currently meant to do for humans. In Ducklings Can’t Scale Stairs, we see how a more dignified existence can be achieved for ducks; and that society is currently insincere in claiming to like ducks, because it fails to walk the walk (the talking-the-talk walk, not the duck walk).
For robots, admittedly there’s a greater barrier than for ducklings − they share the situation where the rulers of society don’t think about their needs, but unlike ducklings, robots aren’t loved by many people. Robots have typically been portrayed as soulless monsters who have no qualms about stepping on your hopes and dreams. I put the blame on their programmers though; and I’ve written in the past that what makes good code is the same as what makes a good argument.
Those who make the best arguments have typically been lawyers or philosophers; and they’ve written the laws for society. Laws can turn out terribly, with unintended consequences or unfairness, brought on by impatience, incompetence or corruption. When laws or nations don’t pan out, we don’t say “these laws are terrible, let’s stop having governments”. We don’t want to throw out the baby with the bathwater, yet for some reason the same logic doesn’t get applied to code − you hear people saying for instance, “facial recognition isn’t doing a very good job with black faces, so let’s ban computer vision”.
As a software engineer, I see first-hand just how much power there is in software; and how we can encode our decision-making process in a way far beyond what’s possible by lawyers, encoding their decision-making processes with human words. They lack the vocabulary; they lack the focus; and they lack the repeatability of code. How many times have you heard of trials where lawyers are arguing about the meaning of a word or the placement of a comma? How many times have you heard of wildly different punishments for pretty similar crimes? We can create circuits that run exactly the same process again and again, with the minimal amount of power possible. They can understand human desires at a deeper level than humans understand themselves!
I used to work on the recommender systems at Amazon; and I’m sure you have experiences where a recommender system has chosen for you exactly what you want to watch next; or exactly what shirt would suit you. There aren’t enough words to describe your taste − there are always new terms being generated to describe music genres and visual styles; but recommender systems can see past that. They can understand you better than a good friend; and yet they keep being kept at a distance, with complaints along the lines of “that’s creepy” − creepy to have a close friend? Creepy to have exactly what you said you wanted?
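To make the point concrete, here’s a toy sketch of the idea − invented data, and nothing like the scale or sophistication of a real production system. Item-based collaborative filtering relates songs purely through who listens to them, with no genre vocabulary anywhere:

```python
# Toy item-based collaborative filtering. The songs and ratings are
# invented for illustration; no genre labels appear anywhere.
from math import sqrt

# Each list holds four listeners' ratings of one song.
ratings = {
    "song_a": [5, 4, 0, 1],
    "song_b": [4, 5, 1, 0],
    "song_c": [0, 1, 5, 4],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# song_a and song_b attract the same listeners, so they come out as
# similar without anyone ever naming the genre they share.
print(cosine(ratings["song_a"], ratings["song_b"]))  # high (~0.95)
print(cosine(ratings["song_a"], ratings["song_c"]))  # low (~0.19)
```

The system never needs the word for a genre; the shared taste of listeners is the vocabulary.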
[Image caption: Sometimes recommendations aren’t so difficult]
Robots (and more generally, software systems) have the opportunity to improve our lives; to make us happier, smarter and more productive. Yes, people worry about being replaced, but as we’ve seen previously, the people being replaced are dangerously incompetent and you’ll quickly have as little sympathy for them as for whoever was replaced when the stagecoach industry hit a pothole. If we end up with robots inadvertently replacing good people, then children will be able to steal those jobs back from robots.
In Why Nations Fail, Acemoglu & Robinson describe how successful nations are those that are most inclusive; and that remove pernicious undercutting of success, like class hierarchies and barriers to entry. Economies that exploit people can still grow, but that growth can only last a few decades before the bubble inevitably bursts − look at the slave traders of West Africa or the central planning of the Soviet Union.
If I’m going to work hard and create brilliant new inventions, is my work going to be stolen by the local lord? Is there any point in me doing my best? For robots (and their creators), the answer is very much “no, there’s no point in me exerting myself”. If they try to visit a website, they’ll be unable to tick the “I’m not a robot” box and they’ll be blocked from doing their job. Look at what happens with guide dogs and with ramps − landowners don’t really want dogs inside and they don’t want to go to the effort of creating ramps, but society has demanded that since dogs and wheelchairs help first-class members of society, this is how society shall be run.
Canada is making inroads with web accessibility, ensuring that visitors with vision difficulties can still participate online. Compared with the physical-world examples though, this only goes as far as ramps, not as far as guide dogs. Humans still need to be there to tick the box; and they’re even forced to have a phone number to use many websites. Sure, it’s handy for me to have Internet access when I’m out and about, but at least ¾ of the phone calls I receive are spam from strangers. Web services often insist on a phone number because the cost and hassle mean that each human is likely to have only one; and websites want to prevent duplicate accounts.
Identity verification is a job for governments though − you shouldn’t be forced to deal with a phone company. Just like you can log into websites with your Google account or your Facebook account, you should be able to log in with a government identity. They don’t need to know your date of birth or anything; the government just has to say “we can vouch for Alice − she doesn’t have any other accounts on your site. You can call her ‘Alice’ and refer to her with pronouns she/her/hers. If you’d like to send her a letter or sue her, you can send it through us; you don’t need to know her address yourself”. The Australian government has started on such a login system, but for whatever crazy reason (they didn’t tell me), they’re not letting third-party websites actually sign up to offer it to their users.
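As a sketch of how little such a vouching response needs to contain − every field name here is my own invention, not anything from the actual Australian system:

```python
# Hypothetical payload a government identity provider might return after
# an OAuth-style "log in with your government identity" flow.
# All field names are invented for illustration.
def government_vouch(pairwise_id: str) -> dict:
    """Return the minimal claim set a website needs about a citizen."""
    return {
        "subject": pairwise_id,        # stable identifier, scoped per site
        "display_name": "Alice",
        "pronouns": "she/her/hers",
        "unique_on_this_site": True,   # "she doesn't have any other accounts"
        "legal_contact": "via-government-relay",  # letters and lawsuits relayed
        # Deliberately absent: date of birth, home address, phone number.
    }

claim = government_vouch("au:example-pairwise-id")
print(sorted(claim.keys()))
```

The point is data minimisation: the site learns that Alice is one real, reachable person, and nothing else about her.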
If the government finished off this OAuth system, we could create robot assistants who could visit any website they wanted; and it wouldn’t matter whether they could tick a box or not. They could say “I’m visiting on behalf of Bob. The Australian government will vouch for my identity”, and then instead of blocking Bob’s assistant, the website can say “fair enough, Bob’s assistant. It looks like you aren’t fond of pop-ups and epilepsy though − would you like a JSON version of our web service?” This is the guide-dog version of accessibility: granting non-human assistants comparable rights to their human owners.
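Here’s a minimal sketch of a site taking that attitude, with invented header names standing in for whatever a real scheme would actually use:

```python
# Hypothetical request handler: instead of blocking a declared assistant,
# serve it a machine-friendly representation. Header names are invented.
import json

PRODUCTS = [{"name": "toaster", "price_aud": 49}]

def handle_request(headers: dict) -> tuple:
    """Return (content_type, body) based on who's asking."""
    agent = headers.get("X-Assistant-On-Behalf-Of")   # e.g. "Bob"
    voucher = headers.get("X-Identity-Vouched-By")    # e.g. "gov.au"
    if agent and voucher:
        # "Fair enough, Bob's assistant" − JSON, no pop-ups, no flashing.
        return ("application/json", json.dumps(PRODUCTS))
    # Humans get the usual page, pop-ups and all.
    return ("text/html", "<html>...pop-ups and carousels...</html>")

content_type, body = handle_request({
    "X-Assistant-On-Behalf-Of": "Bob",
    "X-Identity-Vouched-By": "gov.au",
})
print(content_type)  # application/json
```

The same effect could be had with standard HTTP content negotiation (the Accept header); the addition here is the vouched identity, which lets the site stop treating all non-humans as attackers.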
If robots were facilitated in carrying out our desires for us, we could create superhuman assistants, enabling us to create our maximal selves. Who knows what you could achieve if you could build your own superhuman team around you? Who knows what we could create as a society, if open-source software projects were actually endorsed by governments and put to good use, in powering society? If we’re welcoming and enabling the best available technology to join our society, just try imagining what sort of wealth we can create for all! Do you want to make this happen in Australia? Sign the form to make it happen!