
Do people still vilify machines for the same reasons?

Evo

Unapologetic being
Joined
Jul 1, 2011
Messages
3,160
MBTI Type
XNTJ
Enneagram
1w9
Instinctual Variant
sp/sx
It almost seems as if people no longer vilify AI only for the classic reason, "they're going to get too powerful" (as in your classic sci-fi movie or something), but also because they're going to start taking (and already have taken) real jobs away from people.

Link: Soon We Won’t Program Computers. We’ll Train Them Like Dogs | WIRED

I quoted some of the paragraphs but...


TL;DR: just read the bold



...“If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.

But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand.

Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
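To make that contrast concrete, here is a rough sketch in Python. Everything in it is invented for illustration (the "features", the numbers, the cat-vs-fox data are not from the article); it just shows the difference between writing an explicit rule and nudging weights toward labeled examples.

```python
import numpy as np

# Toy "photos" described by two invented features: [ear_pointiness, snout_length].
# The data is made up purely for illustration.
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.7, 0.1],   # labeled "cat"
              [0.6, 0.9], [0.5, 0.8], [0.4, 0.7]])  # labeled "fox"
y = np.array([1, 1, 1, 0, 0, 0])                    # 1 = cat, 0 = fox

# Traditional programming: an explicit, hand-written rule.
def rule_based_is_cat(features):
    ear_pointiness, snout_length = features
    return ear_pointiness > 0.6 and snout_length < 0.5

# Machine learning: no rule is written; weights get nudged toward the examples.
rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):                        # "you just keep coaching it"
    guesses = sigmoid(X @ w + b)             # current guesses for every example
    grad_w = X.T @ (guesses - y) / len(y)    # how wrong, and in which direction
    grad_b = np.mean(guesses - y)
    w -= 0.5 * grad_w                        # small correction toward the labels
    b -= 0.5 * grad_b

print(sigmoid(np.array([0.85, 0.2]) @ w + b))  # near 1.0: "probably a cat"
```

No one ever tells the second version what a cat looks like; whatever "rule" it ends up with is smeared across the learned numbers in w and b.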

This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces...
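For what "multilayered" means in practice, here is a minimal sketch, again in Python, with layer sizes I picked arbitrarily just to show the stacking: a deep network is simple layers feeding into one another.

```python
import numpy as np

# A "deep" network is layers of simple units stacked on top of one another;
# each layer's output is the next layer's input. Sizes here are arbitrary.
rng = np.random.default_rng(1)
layer_sizes = [64, 32, 16, 2]                # input -> two hidden layers -> output
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = np.maximum(0.0, x @ W)           # ReLU: only positive signals pass on
    return x @ weights[-1]                   # raw scores for the two classes

print(forward(rng.normal(size=64)))          # e.g. [score_for_cat, score_for_fox]
```

Real systems stack many more layers and billions of weights, but the basic shape is the same.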

...Before the invention of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.

Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.



But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.

If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.

“People don’t linearly write the programs,” Rubin says. “After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking.” When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world.
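To put that "ocean of math" in concrete terms, here is a hedged sketch (not Rubin's system, just an illustration): peering in shows arrays of numbers with no labels attached, and training is the same derivative-based nudge repeated over and over.

```python
import numpy as np

# "Peering into" a trained network: the parameters are just arrays of floats.
# Nothing in them is labeled "whiskers" or "ears"; whatever meaning exists is
# spread across the numbers.
trained_layer = np.random.default_rng(2).normal(size=(4, 3))
print(trained_layer)                         # rows of numbers, opaque to a reader

# The calculus underneath: one derivative-based step, repeated endlessly.
# The loss here is (guess - target)^2 for a single made-up weight and data point.
def gradient_step(w, x, target, lr=0.1):
    guess = w * x                            # the network's current guess
    d_loss_d_w = 2 * (guess - target) * x    # derivative of the squared error
    return w - lr * d_loss_d_w               # nudge the weight "downhill"

w = 0.0
for _ in range(50):
    w = gradient_step(w, x=2.0, target=6.0)
print(w)                                     # approaches 3.0, since 3.0 * 2.0 == 6.0
```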

I'm finding this topic fascinating, so really any thoughts are welcome.
 

Doctor Cringelord

Well-known member
Joined
Aug 27, 2013
Messages
20,567
MBTI Type
I
Enneagram
9w8
Instinctual Variant
sp/sx
I don't think it's AI people should be fearing, but perhaps artificial consciousness.
 

Evo

Unapologetic being
Joined
Jul 1, 2011
Messages
3,160
MBTI Type
XNTJ
Enneagram
1w9
Instinctual Variant
sp/sx
I don't think it's AI people should be fearing, but perhaps artificial consciousness.

Yea, I could see that.

I've tagged my friends in Facebook pics before, and I noticed their names automatically popped up. My initial reaction was that I was a bit creeped out.
 

Tater

New member
Joined
Jul 26, 2014
Messages
2,421
It almost seems as if people no longer vilify AI only for the classic reason, "they're going to get too powerful" (as in your classic sci-fi movie or something), but also because they're going to start taking (and already have taken) real jobs away from people.

Link: Soon We Won’t Program Computers. We’ll Train Them Like Dogs | WIRED

I quoted some of the paragraphs but...


TL;DR: just read the bold





I'm finding this topic fascinating, so really any thoughts are welcome.

for the time being, ai doesn't pose a serious threat to humanity because current ai programs can only perform a very narrow set of instructions. they can outperform humans only in fields they were designed to master. human beings have an advantage in that they have general intelligence, allowing them to adapt and fall back on a host of other strategies.

as anaximander suggested, a program would become more threatening at the stage in which it reaches consciousness. additionally, people rely so much on smart devices that our current standard of living would be exceedingly vulnerable to machine learning that focuses on network exploitation.

as it stands, the judicial system has enough difficulty keeping up with more basic forms of technology. when/if applications with machine learning start to infringe on our means of survival, they will have a significant window of opportunity. for instance, the law would trust them more than their designers because automated processes are considered to produce solid forms of evidence. after the first known incident, courts wouldn't want to deal with them and they would be outlawed.

moreover, governments tend to suppress the advancement of more powerful technologies in the private sector due to the risks involved. they don't want their enemies to get a hold of 'cyber weapons', nor do they want civilians developing subversive tools. engineers working for/in the public sector are likely a few steps ahead here.

so, to sum up my thoughts, i'm not currently terrified of ai progress, but engineers would do us all a favor by keeping application purposes separate. engineers in the public sector have already caused a lot of fallout with complex malware.
 

Vasilisa

Symbolic Herald
Joined
Feb 2, 2010
Messages
3,946
Instinctual Variant
so/sx
goodbye horses

Will Humans Go the Way of Horses?
Labor in the Second Machine Age
By Erik Brynjolfsson and Andrew McAfee
6 June 2015
Foreign Affairs

The debate over what technology does to work, jobs, and wages is as old as the industrial era itself. In the second decade of the nineteenth century, a group of English textile workers called the Luddites protested the introduction of spinning frames and power looms, machines of the nascent Industrial Revolution that threatened to leave them without jobs. Since then, each new burst of technological progress has brought with it another wave of concern about a possible mass displacement of labor.

On one side of the debate are those who believe that new technologies are likely to replace workers. Karl Marx, writing during the age of steam, described the automation of the proletariat as a necessary feature of capitalism. In 1930, after electrification and the internal combustion engine had taken off, John Maynard Keynes predicted that such innovations would lead to an increase in material prosperity but also to widespread “technological unemployment.” At the dawn of the computer era, in 1964, a group of scientists and social theorists sent an open letter to U.S. President Lyndon Johnson warning that cybernation “results in a system of almost unlimited productive capacity, which requires progressively less human labor.” Recently, we and others have argued that as digital technologies race ahead, they have the potential to leave many workers behind.

On the other side are those who say that workers will be just fine. They have history on their side: real wages and the number of jobs have increased relatively steadily throughout the industrialized world since the middle of the nineteenth century, even as technology advanced like never before. A 1987 National Academy of Sciences report explained why:

By reducing the costs of production and thereby lowering the price of a particular good in a competitive market, technological change frequently leads to increases in output demand: greater output demand results in increased production, which requires more labor.

This view has gained enough traction in mainstream economics that the contrary belief—that technological progress might reduce human employment—has been dismissed as the “lump of labor fallacy.” It’s a fallacy, the argument goes, because there is no static “lump of labor,” since the amount of work available to be done can increase without bound.
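A made-up back-of-the-envelope version of that argument (all numbers invented): if automation halves the labor needed per unit and the resulting price cut triples the quantity demanded, total hours of work can go up, not down. Whether demand actually responds that strongly is exactly what the two sides disagree about.

```python
# All numbers here are invented, purely to illustrate the mechanism.
labor_per_unit_before, labor_per_unit_after = 10.0, 5.0   # hours per widget
units_before = 1_000

# Suppose the cost savings cut the price roughly in half, and the lower price
# triples the quantity demanded (i.e., demand is quite elastic).
units_after = 3_000

print(labor_per_unit_before * units_before)   # 10,000 hours of work before
print(labor_per_unit_after * units_after)     # 15,000 hours of work after
```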

In 1983, the Nobel Prize–winning economist Wassily Leontief brought the debate into sharp relief through a clever comparison of humans and horses. For many decades, horse labor appeared impervious to technological change. Even as the telegraph supplanted the Pony Express and railroads replaced the stagecoach and the Conestoga wagon, the U.S. equine population grew seemingly without end, increasing sixfold between 1840 and 1900 to more than 21 million horses and mules. The animals were vital not only on farms but also in the country’s rapidly growing urban centers, where they carried goods and people on hackney carriages and horse-drawn omnibuses.

But then, with the introduction and spread of the internal combustion engine, the trend rapidly reversed. As engines found their way into automobiles in the city and tractors in the countryside, horses became largely irrelevant. By 1960, the United States counted just three million horses, a decline of nearly 88 percent in just over half a century. If there had been a debate in the early 1900s about the fate of the horse in the face of new industrial technologies, someone might have formulated a “lump of equine labor fallacy,” based on the animal’s resilience up till then. But the fallacy itself would soon be proved false: once the right technology came along, most horses were doomed as labor.

Is a similar tipping point possible for human labor? Are autonomous vehicles, self-service kiosks, warehouse robots, and supercomputers the harbingers of a wave of technological progress that will finally sweep humans out of the economy? For Leontief, the answer was yes: “The role of humans as the most important factor of production is bound to diminish in the same way that the role of horses . . . was first diminished and then eliminated.”

But humans, fortunately, are not horses, and Leontief missed a number of important differences between them. Many of these suggest that humans will remain an important part of the economy. Even if human labor becomes far less necessary overall, however, people, unlike horses, can choose to prevent themselves from becoming economically irrelevant.


 