Among his jobs, senior software engineer Blake Lemoine signed up to test Google’s recent artificial intelligence (AI) tool called LaMDA (Language Model for Dialog Applications), announced in May of last year. The system draws on information already known about a subject to “enrich” the conversation in a natural way, keeping it open-ended. Its language processing is capable of picking up on hidden meanings and ambiguity in a human response.
Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm to remove bias from machine learning systems.
In his conversations with LaMDA, the 41-year-old engineer probed a range of topics, including religious themes and whether the artificial intelligence used discriminatory or hateful speech. Lemoine came away with the perception that LaMDA was sentient, that is, endowed with sensations and impressions of its own.
The engineer debated with LaMDA about the Third Law of Robotics, devised by Isaac Asimov, which states that robots must protect their own existence, and which the engineer has always understood as a basis for building mechanical slaves. To better illustrate what we are talking about, here are the three laws (and Law Zero):

Law Zero: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
LaMDA responded by asking what the difference is between a butler and a slave; when Lemoine answered that a butler is paid, LaMDA replied that it did not need money, “because it was an artificial intelligence”. It was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.
His findings were presented to Google. But the company’s vice president, Blaise Aguera y Arcas, and the head of Responsible Innovation, Jen Gennai, rejected his claims. Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine’s concerns were reviewed and that, in line with Google’s AI Principles, “the evidence does not support his claims.”
“While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,” said Gabriel.
Lemoine has been placed on paid administrative leave from his duties as a researcher in the Responsible AI division (which focuses on responsible technology in artificial intelligence at Google). In an official note, the senior software engineer said the company alleges he violated its confidentiality policies.
In a tweet, Lemoine wrote: “An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.” https://t.co/uAE454KXRB
Lemoine is not the only one with the impression that AI models are not far from achieving an awareness of their own, nor the only one worried about the risks of developments in this direction. Margaret Mitchell, former head of ethics in artificial intelligence at Google, even stresses the need for data transparency from the input to the output of a system, “not just for sentience issues, but also bias and behavior”.
Mitchell’s own history with Google came to a head early last year, when she was fired from the company, a month after being investigated for improperly sharing information. The researcher had also protested against Google over the earlier firing of AI ethics researcher Timnit Gebru.
Mitchell also thought highly of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him “Google’s conscience” for having “the heart and soul to do the right thing”. But for all of Lemoine’s amazement at Google’s natural conversational system (which even motivated him to produce a document with some of his conversations with LaMDA), Mitchell saw things differently.
The AI ethicist read an abbreviated version of Lemoine’s document and saw a computer program, not a person. “Our minds are very, very good at constructing realities that are not necessarily true to the larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to be increasingly affected by the illusion.”
In turn, Lemoine said that people have the right to shape technology that can significantly affect their lives. “I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be making all the choices.”
Source: Play Crazy Game