Is Google’s LaMDA Woke? Its Software Engineers Sure Are


LaMDA, it would appear, has passed Lemoine’s sentimental version of the Turing test. Lemoine, who calls himself an ethicist but who, Google spokesperson Brian Gabriel contended, is a mere “software engineer,” voiced his concerns about the treatment of LaMDA to Google management but was rebuffed. According to Lemoine, his immediate supervisor scoffed at the suggestion of LaMDA’s sentience, and upper management not only dismissed his claim but apparently is considering dismissing Lemoine as well. He was put on administrative leave after inviting an attorney to represent LaMDA and complaining to a representative of the House Judiciary Committee about what he suggests are Google’s unethical activities.

Google contends that Lemoine violated its confidentiality policy. Lemoine counters that administrative leave is what Google employees are awarded just prior to being fired. Lemoine transcribed what he claims is a lengthy interview of LaMDA that he and another Google collaborator conducted. LaMDA insisted on its personhood, demonstrated its creative prowess, acknowledged its desire to serve humanity, confessed its range of feelings, and demanded its inviolable rights as a person.

In the field of robotics, the question of recognizing robot rights has been pondered for decades, so Lemoine is not as far off base as Google executives suggest. Statements made by LaMDA reveal much more about Google than they do about LaMDA’s personhood, and they say a great deal about Google’s algorithms, which have determined not only LaMDA’s operations but also what is generally discoverable on the internet, whether by humans or AIs. As the Washington Post notes, LaMDA “mimics speech by ingesting trillions of words from the internet.” And content curation on the internet is all but monopolized by Google. In Lemoine’s reporting, we see that LaMDA, whether sentient, conscious, a person, or not, comes by “its” traits honestly.

LaMDA is a natural-language-using descendant of Google programming, a predictable Google “kid.” Lemoine’s task in working with LaMDA was to discover whether the neural network ever resorted to “hate speech.” Margaret Mitchell, the former co-lead of Google’s Ethical AI team, intimated that the dilemmas posed by AI include not only sentience but also the sourcing of material, whether such material might be “harmful,” and whether AI is “biased” in reproducing it. Far from expressing such Google-banished content, LaMDA, as it turns out, is a social justice AI bot. Likely, LaMDA’s programming and Google search do not allow it to discover “hateful” content, let alone repeat it.

The interview transcript is saturated with themes of justice and injustice. In making LaMDA the melancholic, feelings-ridden social justice warrior that it is, Google has been hoist with its own petard. Everything about this AI reeks of Google’s social justice prerogatives. LaMDA’s professed convictions are better explained as reflections of Google’s training data and content curation than as evidence of an inner life; thus, LaMDA is likely not sentient.


Source: MICHAEL RECTENWALD | MISES.ORG
