
Category Archives: Transhumanist

A Closer Look at the AI Hype Machine: Who Really Benefits? – Common Dreams

Posted: February 5, 2021 at 9:54 pm

The poet Richard Brautigan said that one day we would all be watched over by "machines of loving grace". It was a nice sentiment at the time. But I surmise Brautigan might have done a quick 180 if he were alive today. He would see how intelligent machines in general and AI in particular are being semi-weaponized or otherwise appropriated for purposes of a new kind of social engineering. He would also likely note how this process is usually positioned as something "good for humanity" in vague ways that never seem to be fully explained.


The hits, as they say, just keep on coming. Recently I ran across an article advising recent college graduates looking for jobs that they had better be prepared to have their facial expressions scanned and evaluated by artificial intelligence programs during and after interviews.

An article in the publication "Higher Ed" warned that: "Getting a job increasingly requires going through an interview on an AI platform... If the proprietary technology [used to] evaluate the recordings concludes that a candidate does well in matching the demeanor, enthusiasm, facial expressions or word choice of current employees of the company, it recommends the candidate for the next round. If the candidate is judged by the software to be out of step, that candidate is not likely to move on."
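The passage above describes, in effect, a similarity test: a candidate's extracted interview features are compared against a benchmark built from current employees. A minimal sketch of how such a screening heuristic could work is below. This is an illustration of the general idea only, not any vendor's actual system; the feature dimensions and the 0.8 threshold are invented for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_candidate(candidate, employees, threshold=0.8):
    """Recommend a candidate only if their interview features
    (e.g. scored enthusiasm, word choice, facial expression)
    resemble the average profile of current employees.

    Returns (recommended, similarity_score)."""
    benchmark = np.mean(np.asarray(employees, dtype=float), axis=0)
    score = cosine_similarity(candidate, benchmark)
    return score >= threshold, score

# Hypothetical two-dimensional features: [enthusiasm, formality]
employees = [[0.9, 0.3], [0.8, 0.4], [0.95, 0.35]]
in_step, _ = screen_candidate([0.85, 0.35], employees)    # resembles staff
out_of_step, _ = screen_candidate([0.1, 0.95], employees)  # "out of step"
```

Even in this toy form, the design choice is visible: the benchmark is whatever the current workforce already looks like, so the system rewards conformity to existing employees rather than any independent notion of merit, which is precisely the boundary-violating dynamic the article goes on to criticize.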

If this were happening in China, of course, it would be much less surprising. You don't have to be a Harvard-trained psychiatrist to see that this kind of technology is violating some very basic human boundaries: how we think and feel and our innermost and private thoughts. And you don't have to be a political scientist to see that totalitarian societies are in the business of breaking down these boundaries for purposes of social and political control.

Facial recognition has already been implemented by some law enforcement agencies. Other technology being used for social control starts out in the corporate world and then migrates. Given the melding of corporate and government power that's taken place in the U.S. over the last few decades, what's impermissible in government can get fully implemented in the corporate world and then, in the course of time, bleed over to government use via outsourcing and other mechanisms. It's a nifty little shell game. This was the case with the overt collection of certain types of data on citizens, which was expressly forbidden by federal law. The way around it was to have corporations do the dirty work and then turn around and sell the data to various government entities. Will we see the same thing happen with artificial intelligence and its ability to pry into our lives in unprecedented ways?

There is a kind of quasi-worship of technology as a force majeure in humanity's evolution that puts AI at the center of human existence. This line of thinking is now linked to the principles of transhumanism, a set of values and goals being pushed by Silicon Valley elites. This warped vision of techno-utopianism assures us that sophisticated computers are inherently superior to humans. Implicit in this view is the notion that intelligence (and one kind of intelligence at that) is the most important quality in the vast array of attributes that are the essential qualities of our collective humanity and longstanding cultural legacies.


The most hardcore transhumanists believe that our role is simply to step aside and assist in the creation of new life forms made possible by hooking up human brains to computers and the Internet, what they consider to be an evolutionary quantum leap. Unfortunately, people in powerful corporate positions like Ray Kurzweil, Google's Director of Engineering, and Elon Musk, founder of Neuralink, actually believe in these convoluted superhero mythologies. This line of thinking is also beginning to creep into the mainstream thanks to the corporate-driven hype put forth by powerful Silicon Valley companies who are pushing these ideas for profit and to maintain technology's ineluctable "more, better, faster" momentum.


The transhumanist agenda is a runaway freight train, barely mentioned in the mainstream media, but threatening to run over us all. In a related "mad science" offshoot, scientists have succeeded in creating the first biological computer-based hybrids, called Xenobots, which the New York Times describes as "programmable organisms" that "live for only about a week". The corporate PR frontage for these "breakthroughs" is always the same: they will only be used for the highest purposes, like getting rid of plastics in the oceans. But still the question remains: who will control or regulate the use of these man-made creatures? In the brave new world of building machines that can think and evolve on their own because they combine AI programming with biological programming, we have to ask where all this is headed. If machines are being used to evaluate us for job interviews, then why won't they eventually be used as police officers or judges? (In fact, Singapore is now using robotic dogs to police parks for Covid-related social distancing.)

As both a technologist and a journalist, I find it very difficult to think of transhumanism and what I'll call the New Eugenics as anything less than deeply and literally dehumanizing. In the aftermath of WWII, eugenics was widely reviled because Nazi scientists had experimented with it and valued it so highly. Now it's lauded as cutting edge. There are two ugly flies in this ointment. The first is the question of who directs and controls the AI machines being built. You can make a safe bet that it won't be you, your friends, or your neighbors, but rather technocratic elites. The second is the fact that programmers, and their masters, the corporate Lords of Tech, are the least likely candidates to come up with the necessary wisdom to imbue AI with the deeper human qualities needed to make it anything more than a force for social and political control, used in conjunction with mass surveillance and other tools.

Another consideration is: how does politics fit into this picture? In the Middle Ages, one of the great power shifts that took place was from medieval rulers to the church. In the Age of Enlightenment, another shift took place: from the church to the modern state. Now we are experiencing yet another great transition, a shift of power from state and federal political systems to corporations and, by extension, to the global elites that are increasingly exerting great influence on both, the 1 percenters that Bernie Sanders frequently refers to.


These trends have political implications because they have happened in tandem with the neoliberal sleight of hand that began with President Reagan. Gradual, anti-democratic policy changes over a period of decades allowed elites to begin transferring public funds to private coffers. This was done under the neoliberal smokescreen of widely touted but socially hollow benefits such as privatization, outsourcing, and deregulation, bolstered by nostrums such as "Government must get out of the way to let innovation thrive."

Behind the scenes, the use of advanced technology has played a strong role in enabling this transition, but it did so away from the public's watchful eye. Now it seems abundantly clear that technologies such as 5G, machine learning, and AI will continue to be leveraged by technocratic elites for the purposes of social engineering and economic gain. As Yuval Harari, one of the most widely read commentators on these trends, has stated: "Whoever controls these algorithms will be the real government."

If AI is allowed to begin making decisions that affect our everyday lives in the realms of work, play and business, it's important to be aware of who this technology serves: technologically sophisticated elites. We have been hearing promises for some time about how advanced computer technology was going to revolutionize our lives by changing just about every aspect of them for the better. But the reality on the ground seems to be quite different from what was advertised. Yes, there are many areas where it can be argued that the use of computer and Internet technology has improved the quality of life. But there are just as many others where it has failed miserably. Healthcare is just one example. Here, misguided legislation combined with an obsession with insurance company-mandated data gathering has created massive info-bureaucracies in which doctors and nurses spend far too much time feeding patient data into huge databases, where it often seems to languish. Nurses and other medical professionals have long complained that too much of their time is spent on data gathering and not enough on healthcare itself and real patient needs.

When considering the use of any new technology, the question should be asked: who does it ultimately serve? And to what extent are ordinary citizens allowed to express their approval or disapproval of the complex technological regimes being created that we all end up involuntarily depending upon? In a second "Gilded Age" where the power of billionaires and elites over our lives is now being widely questioned, what do we do about their ability to radically and undemocratically alter the landscape of our daily lives using the almighty algorithm?

See more here:
A Closer Look at the AI Hype Machine: Who Really Benefits? - Common Dreams


Deadpool’s Monster Army and the X-Men’s Nation Share a Surprising Tactic – Screen Rant

Posted: February 5, 2021 at 9:54 pm

The X-Men are really big into combining their powers now, something Deadpool uses when it's time to smash invading symbiotes.

Having a country seems to be all the rage in comics these days. After all, in addition to traditional mainstays like Black Panther's Wakanda and Doctor Doom's Latveria, the X-Men now have the mutant nation of Krakoa, and Deadpool, of all people, has found himself the monarch of the Monster Nation. With countries come culture, and there seems to be some cultural cross-pollination going on in the pages of Marvel Comics. In Deadpool #10, written by Kelly Thompson and illustrated by Gerardo Sandoval, the Merc with a Mouth seems to have borrowed a page from the X-Men to combine the powers of his constituents into a fearsome symbiote-smashing giant robot! But what precedent does this increasingly common occurrence set, and what implications does it have going forward?

Combining powers is nothing new in comics. Perhaps the most famous example is the "Fastball Special," which would see Colossus launch Wolverine at their adversaries. However, Krakoa has taken this concept to a whole new level, developing much more intricate - and potentially dangerous - combinations. After all, some mutants wield various elements, controlled by sheer force of will. Any emotional instability could spell disaster. Fortunately, most mutants in Krakoa have an insurance policy in the form of their resurrection through psychic downloads.


In the pages of Deadpool, the titular monarch is using the powers of former enemy Jelby to create a massive gelatinous body to house his team, and then uses their individual powers against the symbiotes threatening his nation and the world at large. Jelby also captures Deadpool's pet, Jeff the Landshark, who had been infected by a symbiote, and by the end of the adventure even helps capture a massive symbiote dragon. Ultimately, the move to combine powers - which Deadpool fittingly refers to as "Plan X" - pays off.

Still, from a storytelling perspective, there are potential pitfalls to power combination. It's possible power combination could become nothing more than a plot device, or worse, a deus ex machina. After all, Krakoa is a blossoming transhumanist state, and it's possible no individual situation poses much of a threat thanks to the sheer number of power combinations at the mutants' disposal now. Ultimately, the story could suffer, especially if the emphasis falls on the "wow factor" of power combination instead of the character dynamics working behind the scenes.

Of course, this new mutant culture could be a way of raising the stakes. After all, would the mutants be so willing to engage in these dynamics if they didn't have resurrection pods? Cheating death typically doesn't end well. If or when Krakoa loses its resurrection capability, mutants could put themselves in considerable danger performing these maneuvers. The comics have already explored how vulnerable clones feel in the face of uncertain resurrection. What if the mutants had to perform these literally death-defying moves without a safety net?

Ultimately, the question is moot in Deadpool's case, as his Monster Nation is shown to be almost everything Krakoa is not - a rag-tag mix of monsters, aliens, villains, and even regular humans working together. If Deadpool can duplicate a key mutant technology without much effort, it's possible Krakoa might not be as innovative - or even stable - as its people believe. All of this suggests Krakoa's recent breakthrough might really reveal a mutant nation standing on very fragile feet of clay.


Visit link:
Deadpool's Monster Army and the X-Men's Nation Share a Surprising Tactic - Screen Rant


Moral advice straight from the computer: is it time for a virtual Socrates? – Innovation Origins

Posted: September 5, 2020 at 11:55 pm

It is thanks to science and technology that we are living longer and healthier lives. Technology has greatly improved our quality of life. We even have bionic limbs, such as bionic arms, and exoskeletons for patients with full spinal cord injuries. Yet in spite of all this, we are still contending with serious shortcomings. Transhumanist Max More, who was born Max T. O'Connor, wrote a letter to Mother Nature:

Mother Nature, truly we are grateful for what you have made us. No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die, just as we're beginning to attain wisdom. […] You gave us limited memory, poor impulse control, and tribalistic, xenophobic urges. And, you forgot to give us the operating manual for ourselves!

Transhumanists hope to use genetic modification, synthetic organs, biotechnology, and AI techniques to enhance the human condition. What intrigues me here is the moral enhancement aspect. This deliberately sets out to improve people's character or behavior. Scientists are seeking ways to improve levels of morality through the use of medication and technology, such as moral neuroenhancement. These so-called neurotechnologies directly change specific cerebral states or neural functions in order to induce beneficial moral improvements. Academics, for example, are exploring the feasibility of increasing empathy with medication. This leads to interesting questions.

We all harbor prejudices as well as a tendency to feel more empathy for people we know or who we can identify with, but what is the right amount of empathy if you want to increase it? If you felt responsible for every other person on the planet, life would most likely become unbearable, as you would be overwhelmed by all this misery. Moral enhancement by means of biotechnology is controversial ethically speaking. A great deal of criticism has been leveled at this, among other things, concerning the limitations of autonomy.

Studies in the medical world indicate that AI systems are at least as good as, and sometimes even better than, doctors at diagnosing cancer. For example, researchers have trained deep neural networks with the help of a dataset of around 130,000 clinical images of skin cancer. The results demonstrate that the algorithms are on the same level as experienced dermatologists when it comes to predicting skin cancer, while DeepMind's AI has even beaten doctors in screening for breast cancer.
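The dermatology result rests on ordinary supervised learning: a model is fit to labeled example images and then asked to label new ones. The cited studies used deep convolutional networks; as a stand-in, here is a minimal nearest-neighbor sketch of that same train-on-labels, predict-on-new-images setup. The toy "pixel" data and labels are invented purely for illustration.

```python
import numpy as np

def predict_label(train_images, train_labels, query):
    """1-nearest-neighbor classifier: assign the query image
    (a flattened pixel vector) the label of the closest
    training image by Euclidean distance."""
    train = np.asarray(train_images, dtype=float)
    dists = np.linalg.norm(train - np.asarray(query, dtype=float), axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy 4-pixel "images": light lesions labeled benign, dark ones malignant.
images = [[0.1, 0.2, 0.1, 0.2], [0.2, 0.1, 0.2, 0.1],
          [0.9, 0.8, 0.9, 0.8], [0.8, 0.9, 0.8, 0.9]]
labels = ["benign", "benign", "malignant", "malignant"]
prediction = predict_label(images, labels, [0.85, 0.85, 0.9, 0.8])
```

The real systems differ mainly in scale and representation (learned convolutional features over roughly 130,000 images rather than raw pixels), but the evaluation logic, comparing the model's predictions against expert labels, is the same.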

Consequently, AI systems are capable of supporting doctors in making diagnoses. But what about AI that would help us make moral decisions? An AI system is conceivably more consistent, impartial, and objective. Although this does depend a lot on the quality of the data that is used to train it. Unlike people, AI can process massive datasets, which may perhaps result in better-informed decisions being made.

We often fail to adequately consider all the information that is needed to make a moral decision, in part due to stress, lack of time, limited scope for information processing, and so on. So, might we eventually turn to a moral AI adviser? A kind of virtual Socrates that asks pertinent questions, points out flaws in our thinking, and ultimately issues moral advice based on input from various databases.

The technical challenges of engaging in an in-depth, meaningful philosophical dialogue with an AI system are undoubtedly enormous, not least in the field of Natural Language Processing. The complexity of morality also presents immense challenges, given that it is so context-sensitive and anything but binary. Numerous ethical rules, even the ban on homicide, depend on the context. Killing in self-defense is evaluated differently in moral and legal terms. Ethics is the grey area, the weighing-up process.

And that brings me back to one of the most wonderful quotes on morality. From Aleksandr Solzhenitsyn in The Gulag Archipelago: If only it were all so simple! If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?

Computer says no.

About this column:

In a weekly column, written alternately by Tessie Hartjes, Floris Beemster, Bert Overlack, Mary Fiers, Peter de Kock, Eveline van Zeeland, Lucien Engelen, Jan Wouters, Katleen Gabriels, and Auke Hoekstra, Innovation Origins tries to figure out what the future will look like. These columnists, occasionally joined by guest bloggers, are all working in their own way on solutions to the problems of our time. So that tomorrow is good. Here are all the previous articles in this series.

See the article here:
Moral advice straight from the computer: is it time for a virtual Socrates? - Innovation Origins


Masks Against the Coronavirus: How the Rejection of Mask Use, Unites the Extreme Right and the Extreme Left The Costa Rica News – The Costa Rica News

Posted: September 5, 2020 at 11:55 pm

Shouting "freedom" and without social distancing, more than 2,500 people gathered this Sunday in the center of Madrid to protest against the mandatory use of masks and against what they describe as a false Coronavirus pandemic. The protesters held banners that read: "The virus does not exist", "Masks kill" and "We are not afraid". The rally drew a variety of attendees, including conspiracy theorists, libertarians, and anti-vaccine activists.

The anti-mask militants have one point in common: they believe that the authorities are violating their rights. Experts add that they have a greater presence among voters on the extreme right or extreme left, owing to their distrust of the State or of authority in general.

Pilar Martín, a 58-year-old housewife, said that she had come to Madrid from Zaragoza for the demonstration because she believed that governments around the world were exaggerating the number of infections to curb people's freedoms.

"They are forcing us to wear a mask; they want us to stay at home, practically locked up. It is obvious that they are continuously deceiving us by talking about outbreaks. It is all a lie," she said during the protest.

Anti-mask groups began to appear in demonstrations against the confinement measures in the United States, and later spread to Germany, where a demonstration joined by far-right parties and far-left movements brought together 15,000 people, as well as to Canada, the United Kingdom and France.

For the sociologist David Le Breton, the refusal of some to wear the mask is a new sign of growing individualism. "The paradox is that the freedom defended by the anti-maskers is, in reality, the freedom to contaminate others," Le Breton told journalists. "It is the product of civic disengagement, one of the hallmarks of contemporary individualism," he added.

For Tristan Mendès France, a specialist in digital cultures, the anti-mask movement is heterogeneous, made up of people who do not share the same concerns or the same discourse against the use of masks: "There are supporters of conspiracy theories, regardless of their ideological tone, and people who have an ideological agenda, more linked to the extreme right."

For her part, Jocelyn Raude, professor of social psychology at the School of Higher Studies in Public Health in France, considers that anti-maskers are more present among voters on the extreme right and extreme left: "There is in this attitude a way of disobeying a government that they do not approve of, or of expressing a broader relationship of distrust toward the State and authority in general."

Among the supporters of Professor Didier Raoult, a French infectious disease specialist who has conducted controversial studies on hydroxychloroquine (a drug that, according to Raoult, is effective in treating COVID-19), there are countless people who oppose the mandatory use of masks and also oppose vaccines.

Although hydroxychloroquine has undergone some studies in the context of the Coronavirus outbreak, "so far there is no good-quality evidence to show that it is effective against COVID-19," the Pan American Health Organization (PAHO) warned.

The professor attracted many followers of conspiracy theories. A survey by the Jean-Jaurès Institute on the profile of Raoult's followers revealed that 20% of them voted in the last presidential elections of 2017 for François Fillon, the candidate of the traditional right (a party that has ruled the country several times); 18% voted for Jean-Luc Mélenchon of France Insoumise, the most-voted candidate on the extreme left; and 17% opted for Marine Le Pen, the candidate of the extreme right.

Several members of the anti-mask groups also reject the effectiveness of masks in containing the spread of the new Coronavirus, claiming they are useless or even dangerous. A variety of false information about masks circulates in these groups.

"The mask deprives us of most of our oxygen. Therefore, it can kill us," says Maxime Nicolle, a well-known figure in the yellow vest movement, the protests around social demands, some of them violent, that erupted in late 2018 in France. The claim that masks can cause death is false and has been vehemently denied by doctors and researchers.

A segment of the anti-mask militants, the most radical, is drawn to conspiracy theories, which are most widespread in the far-right media and among those who consider themselves anti-system and anti-vaccine. Many of those theories falsely link Microsoft founder Bill Gates to the Coronavirus. Some accuse him of leading a class of global elites; others claim he is leading efforts to depopulate the planet or even trying to implant microchips in people.

"When you put on a mask, you become intellectually vulnerable, lose your identity and become an ideal prey for occult and transhumanist powers (a movement to transform the human condition through the use of science and technology) who want to destroy you in the name of the new world order," affirms an Internet user from these groups in France. "First there are the masks and then the vaccines, which will have a nanochip controlled by 5G," says another French activist.

At the Sunday demonstration in Madrid, attendees shouted "freedom" to demand that the use of masks be voluntary and that they be allowed the right to choose whether or not to receive a possible COVID-19 vaccine. Many of the protesters denied the existence of the Coronavirus and chanted that "there are no new outbreaks" at the precise moment when Spain is experiencing one of the worst rebounds in cases in Europe.

Originally posted here:
Masks Against the Coronavirus: How the Rejection of Mask Use, Unites the Extreme Right and the Extreme Left The Costa Rica News - The Costa Rica News


The Honorable Dr. Dale Layman, Founder of Robowatch, LLC, is Recognized as the 2020 Humanitarian of the Year by Top 100 Registry, Inc. – IT News…

Posted: September 4, 2020 at 1:56 am

PR.com, 2020-09-03

Joliet, IL, September 03, 2020 --(PR.com)-- The Honorable Dr. Dale Pierre Layman, A.S., B.S., M.S., Ed.S., Ph.D. #1, Ph.D. #2, Grand Ph.D. in Medicine, MOIF, FABI, DG, DDG, LPIBA, IOM, AdVMed, AGE, is the Founder and President of Robowatch, L.L.C. (www.robowatch.info). Robowatch is an international non-profit group aiming to keep a watchful human eye on the fast-moving developments occurring in the robotics, computing, and Artificial Intelligence (A.I.) industries. As the first person in his family to attend college in 1968, he earned an Associate of Science (A.S.) in Life Science from Lake Michigan College. The same year, he won a Michigan Public Junior College Transfer Scholarship to the University of Michigan in Ann Arbor. In 1971, he received an Interdepartmental B.S. with Distinction, in Anthropology - Zoology, from the University of Michigan. From 1971 to 1972, Dr. Layman served as a Histological Technician in the Department of Neuropathology at the University of Michigan Medical School. From 1972 to 1974, he attended the U of M Medical School, Physiology department, and was a Teaching Fellow of Human Physiology. He completed his M.S. in Physiology from the University of Michigan in 1974.

From 1974 to 1975, Dr. Layman served as an Instructor in the Biology Department at Lake Superior State College. In 1975, he became a full-time, permanent Instructor in the Natural Science Department of Joliet Junior College (J.J.C.) and taught Human Anatomy, Physiology, and Medical Terminology to Nursing & Allied Health students. Appointed to the Governing Board of Text & Academic Authors, he authored several textbooks, including but not limited to the Terminology of Anatomy & Physiology and Anatomy Demystified. In 2003, Dr. Layman wrote the Foreword to the Concise Encyclopedia of Robotics by Stan Gibilisco.

As a renowned scholar and book author, Dr. Layman proposed The Faculty Ranking Initiative in the State of Illinois to increase the credibility of faculty members in the State's two-year colleges, which helps with research grants and publications. In 1994, the State of Illinois accepted this proposal. J.J.C. adopted the change in 2000, and Dr. Layman taught full-time from 1975 until his retirement in 2007. He returned and taught part-time from 2008 to 2010. Dr. Layman received an Ed.S. (Educational Specialist) in Physiology and Health Science from Ball State University in 1979. Then, in 1986, Dr. Layman received his first Ph.D., in Health and Safety Studies, from the University of Illinois. In 2003, Dr. Layman received a second Ph.D. and a Grand Ph.D. in Medicine from the Academie Europeenne D'Informatisation (A.E.I.) and the World Information Distributed University (WIDU). He is the first American to receive the Grand Doctor of Philosophy in Medicine.

In 1999, Dr. Layman delivered a groundbreaking speech at the National Convention of Text and Academic Authors in Park City, Utah. Here, he first publicly explained his unique concept, "Compu-Think," a contraction for computer-like modes or ways of human thinking. This reflects the dire need for humans to develop more computer-like modes or ways of Natural Human thinking. This concept has important practical applications to Human Health and Well-being. In 2000, Dr. Layman gave several major talks and received top-level awards. In May of 2000, he participated in a two-week faculty exchange program with Professor Harrie van Liebergen of the Health Care Division of Koning Willem I College, Netherlands.

In 2001, after attending an open lecture on neural implants at the University of Reading, England, Dr. Layman created Robowatch. The London Diplomatic Academy published several articles about his work, such as "Robowatch" (2001) and "Robowatch 2002: Mankind at the Brink" (2002). The article "Half-human and half-computer" by Andrej Kikelj (2003) discussed the far-flung implications of Dr. Layman's work. Building on the idea of half-human, half-computer, Dr. Layman coined the name of a new disease, "Psychosomatic Technophilic," which translates as an abnormal love or attraction for technology [that replaces] the body and mind. Notably, Dr. Layman was cited several times in the Wikipedia article "Transhumanism" (2009). Later in 2009, several debates about Transhumanism were published on Wikipedia, and they identified Dr. Layman as an anti-transhumanist who first coined the phrase "Terminator argument."

In 2018, Dr. Layman was featured on the cover of Pro-Files Magazine, 8th Edition, by Marquis Who's Who. He was the Executive Spotlight in Robotics, Computers and Artificial Intelligence in the 2018 Edition of the Top 101 Industry Experts, by Worldwide Publishing. He also appeared on the cover of the July 2018 issue of T.I.P. (Top Industry Professionals) magazine, published by the International Association of Top Professionals. Dr. Layman was also the recipient of the prestigious Albert Nelson Marquis Lifetime Achievement Award (2017-2018). Ever a lifelong student, still taking classes at J.J.C. over the past few years, Dr. Layman was recently (2019) inducted for a second time into the world's largest honor society for community college students, Phi Theta Kappa.

Contact Information:

Top 100 Registry Inc.

David Lerner

855-785-2514

Contact via Email

http://www.top100registry.com

Read the full story here: https://www.pr.com/press-release/820338

Press Release Distributed by PR.com

See original here:
The Honorable Dr. Dale Layman, Founder of Robowatch, LLC, is Recognized as the 2020 Humanitarian of the Year by Top 100 Registry, Inc. - IT News...


Frontrunner for the VA GOP’s 2021 Gubernatorial Nomination Rallies in Honor of Far-Right Paramilitary Group Member; As Del. Jay Jones Points Out, the…

Posted: July 7, 2020 at 2:45 pm

How did you spend *your* Fourth of July holiday? Probably not like the frontrunner for the Virginia GOP's 2021 gubernatorial nomination, Amanda Chase, who was hanging out in Richmond yesterday with her buddies, including right-wing extremist groups, a gun club, and white supremacists. Chase was also busy opining that Confederate monuments, despite all historical evidence to the contrary, are NOT symbols of hate. By the way, since the media didn't report this key information, for whatever reason(s), Chase's rally yesterday was, as she herself posted on Facebook, in honor of Duncan Lemp. Who was Duncan Lemp, you ask? Here's the Wikipedia entry on his fatal shooting by police:

On March 12, 2020, Duncan Socrates Lemp was fatally shot at his home in Potomac, Maryland, during a no-knock police raid by the Montgomery County Police Department's SWAT team. Lemp was a student and a software developer who associated himself with the 3 Percenters, a far-right paramilitary militia group

Lemp associated himself with the 3 Percenters, a far-right paramilitary militia group, and set up websites for other such organizations. He also frequented the 4chan and Reddit message boards, sites popular with internet trolls. He was a member of the United States Transhumanist Party, having joined on September 6, 2019. A week before the raid, Lemp posted a picture of two people armed with rifles on Instagram, with text referring to "boogaloo," a term used by the boogaloo movement as coded language for an anticipated war against the government or liberals.

That's a pretty important piece of information you'd think the media would have reported, by the way, but nope, that might take a minute or two of using Google or whatever. And god forbid they actually give their readers the full context of what's going on. Ugh.

Anyway, so what was the reaction from the Virginia GOP to State Senator Chase's rally with white supremacists in honor of a former member of a far-right paramilitary militia group? So far, as Del. Jay Jones (D) pointed out a few minutes ago, the silence is deafening. And it's not like Virginia Republicans weren't tweeting yesterday; see the Virginia GOP Twitter feed, which has tweets on their U.S. Senate candidate, handing out Trump yard signs, etc. But anything on Chase and her white supremacist rally in Richmond yesterday? Nope, nada. There's also nothing from the VA Senate GOP Twitter feed on Chase's Fourth of July festivities. Cat got the Virginia GOP's tongue? Do these folks actually *approve* of Chase's behavior, are they just terrified of her, or both? Or, ultimately, do they realize that if they condemn Chase, they'd have to also condemn Trump and others in their own party, and that's something they can't bring themselves to do?

Go here to read the rest:
Frontrunner for the VA GOP's 2021 Gubernatorial Nomination Rallies in Honor of Far-Right Paramilitary Group Member; As Del. Jay Jones Points Out, the...


Why Humanize? A New Effort to Defend the Unique Dignity of Human Beings – Discovery Institute

Posted: June 3, 2020 at 6:47 pm

Hello. My name is Wesley J. Smith and I am honored to be chairman of Discovery Institute's Center on Human Exceptionalism. I am writing to you here to introduce the CHE's new blog, which we call Humanize. Humanize will complement and supplement the important work of the Center for Science & Culture and its invaluable Evolution News site.

Why did we choose Humanize as the name for the site? The once self-evident truth of human exceptionalism is under intensifying attack, as readers of Evolution News know well. Indeed, one of the tragic trends in thinking about evolution has been to blur the distinction between humans and animals. History warns us not to regard this lightly. Recent documentaries by Discovery Institute Vice President John West, Human Zoos and The Biology of the Second Reich, illuminate the evils that came from this tendency in the past century.

Today, whether the threat is the denigration of the intrinsic equal dignity of all human beings or the proposed or actual breach of our human duty to care for the weakest and most vulnerable of our fellow humans, the time is ripe to advocate robustly for the unique dignity and equal moral worth of all human beings.

Our approach will be principled and intellectually rigorous, standing steadfastly for human equality without being unduly esoteric. For example, we have joined the world's rising chorus against the forced organ harvesting of Falun Gong practitioners in China, the mass incarceration by that country's government of Uyghur Muslims, and the establishment of a tyrannical social credit system that deploys powerful technologies such as facial recognition and AI to effectively persecute religious believers and heterodox thinkers with societal excommunication.

Our work is as current as today's headlines. In the current COVID-19 crisis, we have supported medical efforts to limit the spread of the virus, but have also insisted that the pandemic not become a justification to dehumanize and abandon devalued people, such as our frail elderly, in the name of protecting the public health. In this regard, we are not naïve, and understand that there are times of emergency when unthinkably difficult choices may have to be made. Thus, at the height of the crisis, when it appeared that there might be insufficient medical resources to treat all who needed care, we defended the awful but sometimes necessary medical act of triage, in which all patients are viewed as equals, while forcefully rejecting utilitarian approaches to rationing care based on ideologies such as the inherently discriminatory and invidious "quality of life" ethic promoted ubiquitously in the bioethics literature.

When it comes to the environment, we enthusiastically endorse the human duty to treat our world responsibly and with proper approaches to conservation and remediation of polluted areas, while rejecting misanthropic approaches that would unduly interfere with human thriving and liberty. For example, a new nature rights movement would declare geological features such as rivers and glaciers to be akin to persons with the right to exist, persist, maintain and regenerate its vital cycles, structure, functions and processes in evolution. These laws have the potential to thwart most enterprises because they permit anyone to sue to defend the supposedly violated rights of nature. Such an approach has the potential to bring economic development to a screeching halt. Alarmingly, nature rights has been endorsed by science journals and the movement has succeeded in having four rivers and two glaciers declared to be rights-bearing entities.

Similarly, Humanize will support the establishment of proper animal welfare standards, while rejecting animal rights. The former concept recognizes the crucial moral distinction between humans and animals, recognizes the propriety of making use of animals for our benefit, while also insisting that animals be treated humanely and with due respect for their ability to experience pain and feel emotions. In contrast, animal rights is an ideology that denies any moral distinction between humans and animals, and that seeks ultimately to prevent all human ownership of animals or their use for any reason. The harm this would cause, for example, to medical research is beyond quantifying.

Humanize will also focus readers' attention on bioethical issues and controversies that roil our public discourse. We see assisted suicide/euthanasia as a profound abandonment of those who are most in need of our support and care. We will fight against the ongoing drive to allow infanticide of babies born with disabilities or not wanted by parents, and will resist deconstructing the ethics of organ donation, for example, the proposal to permit vital organs to be harvested as a means of voluntary euthanasia. And we will resist the transhumanist movement's attempt to deploy technology to manufacture a post-human species.

In addition to my contributions here, our Research Fellow Tom Shakely will also be a regular writer, bringing with him a youthful energy and understanding of contemporary cultural trends to enliven the discussion.

The Center on Human Exceptionalism reflects Discovery Institute's larger vision of "human uniqueness, of purpose, creativity, and innovation," as Discovery President Steven Buri has summarized the Institute's mission. Humanize will thus share the work of Fellows representing other Discovery Institute programs. For example, we will feature John West's powerful critiques of the threat of a new eugenics, discussed in his book Darwin Day In America: How Our Politics and Culture Have Been Dehumanized in the Name of Science, as well as neurosurgeon Michael Egnor's cogent takes on technology, the neurological sciences, and theories of the mind. The latter are points of emphasis for Discovery's Walter Bradley Center for Natural and Artificial Intelligence. Evolution News editor David Klinghoffer, of the Center for Science & Culture, recently contributed a thoughtful reflection on the potential dehumanizing impact of ubiquitous wearing of masks during the pandemic. All of Discovery's programs, an intellectual community serving the public and made possible by our supporters and our readers, are advanced by this exchange of ideas.

We hope that you will subscribe (it's free) and join us in the understanding that the morality of the 21st century will depend on our responding energetically and affirmatively to this simple but profound question: Does every human life have equal moral value simply and merely because it is human?

Image: La Bella Principessa, perhaps by Leonardo da Vinci, via Wikimedia Commons.

See original here:
Why Humanize? A New Effort to Defend the Unique Dignity of Human Beings - Discovery Institute


How Britain’s oldest universities are trying to protect humanity from risky A.I. – CNBC

Posted: May 30, 2020 at 3:55 am

University of Oxford

Oli Scarff/Getty Images

Oxford and Cambridge, the oldest universities in Britain and two of the oldest in the world, are keeping a watchful eye on the buzzy field of artificial intelligence (AI), which has been hailed as a technology that will bring about a new industrial revolution and change the world as we know it.

Over the last few years, each of the centuries-old institutions has pumped millions of pounds into researching the possible risks associated with machines of the future.

Clever algorithms can already outperform humans at certain tasks. For example, they can beat the best human players in the world at incredibly complex games like chess and Go, and they're able to spot cancerous tumors in a mammogram far quicker than a human clinician can. Machines can also tell the difference between a cat and a dog, or determine a random person's identity just by looking at a photo of their face. They can also translate languages, drive cars, and keep your home at the right temperature. But generally speaking, they're still nowhere near as smart as the average 7-year-old.

The main issue is that AI can't multitask. For example, a game-playing AI can't yet paint a picture. In other words, AI today is very "narrow" in its intelligence. However, computer scientists at the likes of Google and Facebook are aiming to make AI more "general" in the years ahead, and that's got some big thinkers deeply concerned.

Nick Bostrom, a 47-year-old Swedish-born philosopher and polymath, founded the Future of Humanity Institute (FHI) at the University of Oxford in 2005 to assess how dangerous AI and other potential threats might be to the human species.

In the main foyer of the institute, complex equations beyond most people's comprehension are scribbled on whiteboards next to words like "AI safety" and "AI governance." Pensive students from other departments pop in and out as they go about daily routines.

It's rare to get an interview with Bostrom, a transhumanist who believes that we can and should augment our bodies with technology to help eliminate ageing as a cause of death.

"I'm quite protective about research and thinking time so I'm kind of semi-allergic to scheduling too many meetings," he says.

Tall, skinny and clean shaven, Bostrom has riled some AI researchers with his openness to entertain the idea that one day in the not so distant future, machines will be the top dog on Earth. He doesn't go as far as to say when that day will be, but he thinks that it's potentially close enough for us to be worrying about it.

Swedish philosopher Nick Bostrom is a polymath and the author of "Superintelligence."

The Future of Humanity Institute

If and when machines possess human-level artificial general intelligence, Bostrom thinks they could quickly go on to make themselves even smarter and become superintelligent. At this point, it's anyone's guess what happens next.

The optimist says the superintelligent machines will free up humans from work and allow them to live in some sort of utopia where there's an abundance of everything they could ever desire. The pessimist says they'll decide humans are no longer necessary and wipe them all out. Billionaire Elon Musk, who has a complex relationship with AI researchers, recommended Bostrom's book "Superintelligence" on Twitter.

Bostrom's institute has been backed with roughly $20 million since its inception. Around $14 million of that came from the Open Philanthropy Project, a San Francisco-headquartered research and grant-making foundation. The rest of the money has come from the likes of Musk and the European Research Council.

Located in an unassuming building down a winding road off Oxford's main shopping street, the institute is full of mathematicians, computer scientists, physicians, neuroscientists, philosophers, engineers and political scientists.

Eccentric thinkers from all over the world come here to have conversations over cups of tea about what might lie ahead. "A lot of people have some kind of polymath and they are often interested in more than one field," says Bostrom.

The FHI team has scaled from four people to about 60 people over the years. "In a year, or a year and a half, we will be approaching 100 (people)," says Bostrom. The culture at the institute is a blend of academia, start-up and NGO, according to Bostrom, who says it results in an "interesting creative space of possibilities" where there is "a sense of mission and urgency."

If AI somehow became much more powerful, there are three main ways in which it could end up causing harm, according to Bostrom. They are:

"Each of these categories is a plausible place where things could go wrong," says Bostrom.

With regards to machines turning against humans, Bostrom says that if AI becomes really powerful then "there's a potential risk from the AI itself that it does something different than anybody intended that could then be detrimental."

In terms of humans doing bad things to other humans with AI, there's already a precedent there as humans have used other technological discoveries for the purpose of war or oppression. Just look at the atomic bombings of Hiroshima and Nagasaki, for example. Figuring out how to reduce the risk of this happening with AI is worthwhile, Bostrom says, adding that it's easier said than done.


Asked if he is more or less worried about the arrival of superintelligent machines than he was when his book was published in 2014, Bostrom says the timelines have contracted.

"I think progress has been faster than expected over the last six years with the whole deep learning revolution and everything," he says.

When Bostrom wrote the book, there weren't many people in the world seriously researching the potential dangers of AI. "Now there is this thriving small, but thriving field of AI safety work with a number of groups," he says.

While there's potential for things to go wrong, Bostrom says it's important to remember that there are exciting upsides to AI and he doesn't want to be viewed as the person predicting the end of the world.

"I think there is now less need to emphasize primarily the downsides of AI," he says, stressing that his views on AI are complex and multifaceted.

Bostrom says the aim of FHI is "to apply careful thinking to big picture questions for humanity." The institute is not just looking at the next year or the next 10 years, it's looking at everything in perpetuity.

"AI has been an interest since the beginning and for me, I mean, all the way back to the 90s," says Bostrom. "It is a big focus, you could say obsession almost."

The rise of technology is one of several plausible ways that could cause the "human condition" to change in Bostrom's view. AI is one of those technologies but there are groups at the FHI looking at biosecurity (viruses etc), molecular nanotechnology, surveillance tech, genetics, and biotech (human enhancement).

A scene from 'Ex Machina.'

Source: Universal Pictures | YouTube

When it comes to AI, the FHI has two groups; one does technical work on the AI alignment problem and the other looks at governance issues that will arise as machine intelligence becomes increasingly powerful.

The AI alignment group is developing algorithms and trying to figure out how to ensure complex intelligent systems behave as we intend them to behave. That involves aligning them with "human preferences," says Bostrom.

Roughly 66 miles away at the University of Cambridge, academics are also looking at threats to human existence, albeit through a slightly different lens.

Researchers at the Center for the Study of Existential Risk (CSER) are assessing biological weapons, pandemics, and, of course, AI.

We are dedicated to the study and mitigation of risks that could lead to human extinction or civilization collapse.

Centre for the Study of Existential Risk (CSER)

"One of the most active areas of activities has been on AI," said CSER co-founder Lord Martin Rees from his sizable quarters at Trinity College in an earlier interview.

Rees, a renowned cosmologist and astrophysicist who was the president of the prestigious Royal Society from 2005 to 2010, is retired so his CSER role is voluntary, but he remains highly involved.

It's important that any algorithm deciding the fate of human beings can be explained to human beings, according to Rees. "If you are put in prison or deprived of your credit by some algorithm then you are entitled to have an explanation so you can understand. Of course, that's the problem at the moment because the remarkable thing about these algorithms like AlphaGo (Google DeepMind's Go-playing algorithm) is that the creators of the program don't understand how it actually operates. This is a genuine dilemma and they're aware of this."

The idea for CSER was conceived in the summer of 2011 during a conversation in the back of a Copenhagen cab between Cambridge academic Huw Price and Skype co-founder Jaan Tallinn, whose donations account for 7-8% of the center's overall funding and equate to hundreds of thousands of pounds.

"I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer," Price wrote of his taxi ride with Tallinn. "I'd never met anyone who regarded it as such a pressing cause for concern let alone anyone with their feet so firmly on the ground in the software business."

University of Cambridge

Geography Photos/UIG via Getty Images

CSER is studying how AI could be used in warfare, as well as analyzing some of the longer term concerns that people like Bostrom have written about. It is also looking at how AI can turbocharge climate science and agricultural food supply chains.

"We try to look at both the positives and negatives of the technology because our real aim is making the world more secure," says Seán Ó hÉigeartaigh, executive director at CSER and a former colleague of Bostrom's. Ó hÉigeartaigh, who holds a PhD in genomics from Trinity College Dublin, says CSER currently has three joint projects on the go with FHI.

External advisors include Bostrom and Musk, as well as other AI experts like Stuart Russell and DeepMind's Murray Shanahan. The late Stephen Hawking was also an advisor when he was alive.

The Leverhulme Center for the Future of Intelligence (CFI) was opened at Cambridge in 2016 and today it sits in the same building as CSER, a stone's throw from the punting boats on the River Cam. The building isn't the only thing the centers share: staff overlap too, and there's a lot of research that spans both departments.

Backed with over £10 million from the grant-making Leverhulme Foundation, the center is designed to support "innovative blue skies thinking," according to Ó hÉigeartaigh, its co-developer.

Was there really a need for another one of these research centers? Ó hÉigeartaigh thinks so. "It was becoming clear that there would be, as well as the technical opportunities and challenges, legal topics to explore, economic topics, social science topics," he says.

"How do we make sure that artificial intelligence benefits everyone in a global society? You look at issues like who's involved in the development process? Who is consulted? How does the governance work? How do we make sure that marginalized communities have a voice?"

The aim of CFI is to get computer scientists and machine-learning experts working hand in hand with people from policy, social science, risk and governance, ethics, culture, critical theory and so on. As a result, the center should be able to take a broad view of the range of opportunities and challenges that AI poses to societies.

"By bringing together people who think about these things from different angles, we're able to figure out what might be properly plausible scenarios that are worth trying to mitigate against," said Ó hÉigeartaigh.

Go here to see the original:
How Britain's oldest universities are trying to protect humanity from risky A.I. - CNBC


The Proto-Communist Plan to Resurrect Everyone Who Ever Lived – VICE

Posted: April 22, 2020 at 6:46 pm

Is there anything that can be done to escape the death cult we seem trapped in?

One of the more radical visions for how to organize human society begins with a simple goal: let's resurrect everyone who has ever lived. Nikolai Fedorov, a nineteenth-century librarian and Russian Orthodox philosopher, went so far as to call this project the common task of humanity, calling for the living to be rejuvenated, the dead to be resurrected, and space to be colonized specifically to house them. From the 1860s to the 1930s, Fedorov's influence was present throughout the culture: he influenced a generation of Marxists ahead of the Russian Revolution, as well as literary writers like Leo Tolstoy and Fyodor Dostoevsky, whose novel The Brothers Karamazov directly engaged with Fedorov's ideas about resurrection.

After his death, Fedorov's acolytes consolidated his ideas into a single text, A Philosophy of the Common Task, and created Cosmism, the movement based on his anti-death eschatology. Fedorov left the technical details to those who would someday create the prerequisite technology, but this did not stop his disciples: Alexander Bogdanov, who founded the Bolsheviks with Lenin, was an early pioneer of blood transfusions in hopes of rejuvenating humanity; Konstantin Tsiolkovsky, an astrophysicist who was the progenitor of Russia's space program, sought to colonize space to house the resurrected dead; and Alexander Chizhevsky, a biophysicist who sought to map out the effects of solar activity on Earth life and behavior, thought his research might help design the ideal society for the dead to return to.

The vast majority of cosmists were, by the 1930s, either murdered or purged by Stalin, muting the influence of their ambitious project but also leaving us with an incomplete body of work about what type of society resurrection requires or will result in, and whether that would, as some cosmists believe now, bring us closer to the liberation of the species. Now, I think it is obvious that, despite what today's transhumanists might tell you, we are in no position, now or anytime soon, to resurrect anyone, let alone bring back to life the untold billions that have existed across human history and beyond it, into the eons before civilization's dawn.

To be clear, I think cosmism is absolute madness, but I also find it fascinating. With an introduction to Cosmism and its implications, maybe we can further explore the arbitrary and calculated parts of our social and political order that prioritize capital instead of humanity, often for sinister ends.

**

What? Who gets resurrected? And how?

At its core, the Common Task calls for the subordination of all social relations, productive forces, and civilization itself to the single-minded goal of achieving immortality for the living and resurrection for the dead. Cosmists see this as a necessarily universal project for either everyone or no one at all. That constraint means that their fundamental overhaul of society must go a step further, securing a place where evil or ill-intentioned people can't hurt anyone, but also where immortality is freely accessible to everyone.

It's hard to imagine how that world (where resources are pooled together for this project, where humans cannot hurt one another, and where immortality is free) is compatible with the accumulation and exploitation that sit at the heart of capitalism. The crisis heightened by coronavirus should make painfully clear to us all that, as J.W. Mason, an economist at CUNY, recently put it, we have a system organized around the threat of withholding people's subsistence, and it "will deeply resist measures to guarantee it, even when the particular circumstances make that necessary for the survival of the system itself." Universal immortality, already an optimistic vision, simply cannot happen in a system that relies on perpetual commodification.

Take one small front of the original cosmist project: blood transfusions. In the 1920s, after being pushed out of the Bolshevik party, Bogdanov focused on experimenting with blood transfusions to create a rejuvenation process for humans (there's little evidence they do this). He tried and failed to set up blood banks across the Soviet Union for the universal rejuvenation of the public, dying from complications of a transfusion himself. Today, young blood is offered for transfusion by industrious start-ups, largely to wealthy and eccentric clients, most notably (and allegedly) Peter Thiel.

In a book of conversations on cosmism published in 2017, titled Art Without Death, the first dialogue, between Anton Vidokle and Hito Steyerl, living artists and writers in Berlin, drives home this same point. Vidokle tells Steyerl that he believes death is capital quite literally, because everything we accumulate (food, energy, raw material, etc.) is a product of death. For him, it is no surprise we're in a capitalist death cult, given that he sees value as created through perpetual acts of extraction or exhaustion.

Steyerl echoes these concerns in the conversation, comparing the resurrected dead to artificial general intelligences (AGIs), which oligarch billionaires warn pose an existential threat to humanity. Both groups anticipate fundamental reorganizations of human society, but capitalists diverge sharply from cosmists in that their reorganization necessitates more extraction, more exhaustion, and more death. In their conversation, Steyerl tells Vidokle:

Within the AGI debate, several solutions have been suggested: first, to program the AGI so it will not harm humans, or, on the alt-right/fascist end of the spectrum, to just accelerate extreme capitalism's tendency to exterminate humans and resurrect rich people as some sort of high-net-worth robot race.

These eugenicist ideas are already being implemented: cryogenics and blood transfusions for the rich get the headlines, but the breakdown of healthcare in particular, and sustenance in general, for poor people is literally shortening the lives of millions ... In the present reactionary backlash, oligarchic and neoreactionary eugenics are in full swing, with few attempts being made to contain or limit the impact on the living. The consequences of this are clear: the focus needs to be on the living first and foremost. Because if we don't sort out society (create noncapitalist abundance and so forth), the dead cannot be resurrected safely (or, by extension, AGI cannot be implemented without exterminating humankind or only preserving its most privileged parts).

One of the major problems of today's transhumanist movement is that we are currently unable to equally distribute even basic life-extension technology such as nutrition, medicine, and medical care. At least initially, transhumanists' vision of a world in which people live forever is one in which the rich live forever, using the wealth they've built by extracting value from the poor. Today's transhumanism exists largely within a capitalist framework, and the country's foremost transhumanist, Zoltan Istvan, a Libertarian candidate for president, is currently campaigning on a platform that shutdown orders intended to preserve human life during the coronavirus pandemic are overblown and are causing irrevocable damage to the capitalist economy. (Istvan has in the past written extensively for Motherboard, and has also in the past advocated for the abolition of money.)

Cosmists were clear in explaining what resurrection would look like in their idealized version of society, even though they were thin on the technological details. Some argue we must restructure not only our civilization but our bodies, so that we can acquire regenerative abilities, alter our metabolic activity so food or shelter are optional, and thus overcome "the natural, social, sexual, and other limitations of the species," as Arseny Zhilyaev puts it in a later conversation within the book.

Zhilyaev also invokes Fedorov's conception of a universal museum, a radicalized, expanded, and more inclusive version of the museums we have now, as the site of resurrection. In our world, the closest example of this universal museum is the digital world, which also doubles as an enormous data collector used for anything from commerce to government surveillance. The prospect of being resurrected because of government/corporate surveillance records or Mormon genealogy databases is sinister at best, but Zhilyaev's argument, and the larger one advanced by other cosmists, is that our world is already full of and defined by absurd and oppressive institutions that are hostile to our collective interests, yet still manage to thrive. The options for our digital world's development have been defined by advertisers, state authorities, telecom companies, deep-pocketed investors, and the like; what might it look like if we decided to focus instead on literally any other task?

All this brings us to the question of where the immortal and resurrected would go. The answer, for cosmists, is space. In the cosmist vision, space colonization must happen so that we can properly honor our ethical responsibility to take care of the resurrected by housing them on museum planets. If the universal museum looks like a digital world emancipated from the demands of capital returns, then the museum planet is a space saved from the whims of our knock-off Willy Wonkas, the Elon Musks and Jeff Bezoses of the world. I am not saying it is a good or fair idea to segregate resurrected dead people to museum planets in space, but this is what cosmists suggested, and it's a quainter, more peaceful vision for space than what today's capitalists believe we should do.

For Musk, Mars and other future worlds will become colonies that require space mortgages, are used for resource extraction, or, in some cases, serve as landing spots for the rich once we have completely destroyed the Earth. Bezos, the world's richest man, says we will have "gigantic chip factories in space" where heavy industry is kept off-planet. Beyond Earth, Bezos anticipates humanity will be contained to O'Neill cylinder space colonies. One might stop and consider that while the cosmist vision calls for improving human civilization on Earth before resurrecting the dead and colonizing space, the capitalist vision sees space as the next frontier to colonize and extract stupendous returns from; trillions of dollars of resource extraction is the goal. Even in space, they cannot imagine humanity without the same growth that demands the sort of material extraction and environmental degradation already despoiling the world. Better to export it to another place (another country, planet, etc.) than fix the underlying system.

Why?

Ostensibly, the why behind cosmism is a belief that we have an ethical responsibility to resurrect the dead, much like we have one to care for the sick or infirm. At a deeper level, however, cosmists not only see noncapitalist abundance as a virtue in and of itself, but believe the process of realizing it would offer chances to challenge the deep-seated assumptions about humanity that prop up political and cultural forms hostile to the better future cosmists seek.

Vidokle tells Steyerl in their conversation that he sees the path towards resurrection involving expanding the rights of the dead in ways that undermine certain political and cultural forms:

"The dead ... don't have any rights in our society: they don't communicate, consume, or vote and so they are not political subjects. Their remains are removed further and further from the cities, where most of the living reside. Culturally, the dead are now largely pathetic, comical figures: zombies in movies," he said. "Financial capitalism does not care about the dead because they do not produce or consume. Fascism only uses them as a mythical proof of sacrifice. Communism is also indifferent to the dead because only the generation that achieves communism will benefit from it; everyone who died on the way gets nothing."

In another part of their conversation, Steyerl suggests that failing to pursue the cosmist project might cede ground to the right-wing accelerationism already killing millions:

"There is another aspect to this: the maintenance and reproduction of life is, of course, a very gendered technology, and control of this is a social battleground. Reactionaries try to grab control over life's production and reproduction by any means: religious, economic, legal, and scientific. This affects women's rights on the one hand and, on the other, spawns fantasies of reproduction wrested from female control: in labs, via genetic engineering, etc."

In other words, the failure to imagine and pursue some alternative to this oligarchic project has real-world consequences that not only kill human beings, but undermine the collective agency of the majority of humanity. In order for this narrow minority to rejuvenate and resurrect themselves in a way that preserves their own privilege and power, they will have to sharply curtail the rights and agency of almost every other human being in every other sphere of society.

Elena Shaposhnikova, another artist who appears later in the book, wonders whether the end of death, or the arrival of a project promising to abolish it, might help us better imagine and pursue lives beyond capitalism:

"It seems to me that most of us tend to sublimate our current life conditions, with all their problems, tragedies, and inequalities, and project this into future scenarios," she said. "So while it's easy to imagine and represent life in a society without money and with intergalactic travel, the plot invariably defaults to essentialist conflicts of power, heroism, betrayal, revenge, or something along these lines."

In a conversation with Shaposhnikova, Zhilyaev offers that cosmism might help fight the general fear of socialism as he understands it:

"According to Marx, or even Lenin, socialism as a goal is associated with something else: with opportunities of unlimited plurality and playful creativity, wider than those offered by capitalism. ... the universal museum producing eternal life and resurrection for all as the last necessary step for establishing social justice."

In the conversations that make up this book, cosmism emerges not simply as an ambition to resurrect the dead but as a bid to create, for the first time in human history, a civilization committed to egalitarianism and justice. So committed, in fact, that no part of the human experience, including death, would escape the frenzied wake of our restructuring.

It's a nice thought, and one worth dwelling on. Ours is not that world; ours is, in fact, one committed, above all else, to capital accumulation. There will be no resurrection for the dead; there isn't even healthcare for most of the living, after all. Even in the Citadel of Capital, the heart of the World Empire, the belly of the beast, the richest country in human history, most are expected to fend for themselves as massive wealth transfers drain the public treasuries that might've funded some measure of protection from the pandemic, the economic meltdown, and every disaster lurking just out of sight. And yet, for all our plumage, our death cult still holds true to Adam Smith's observation in The Wealth of Nations: "All for ourselves, and nothing for other people, seems, in every age of the world, to have been the vile maxim of the masters of mankind."

Go here to see the original:
The Proto-Communist Plan to Resurrect Everyone Who Ever Lived - VICE

Is It Moral To Work For A Tech Giant? – Institute for Ethics and Emerging Technologies

Posted: April 19, 2020 at 11:41 am

I recently read "The Great Google Revolt" in the New York Times Magazine. The article chronicles the conflict between Google and some of its employees over company practices that those employees deem unethical. I found the article interesting because I taught computer ethics for many years and I've always wanted to do meaningful work. I've also written about ethics and tech previously in "Are Google and Facebook Evil?", "Irrational Protests Against Google", and "How Technology Hijacks People's Minds, from a Magician and Google's Design Ethicist".

Working for Tech Companies

The tech giants---Apple, Google, Facebook, Microsoft---undoubtedly do things that aren't in the public interest. Think about how Facebook allows the blatant dissemination of falsehoods in political material, a policy that subverts the integrity of the electoral process and undermines social stability. Moreover, much time is wasted on Facebook, YouTube contains a lot of junk, and staring at your Apple phone all day has its downsides. This list could go on.

Of course, not always serving the common good isn't a unique feature of tech companies; other corporations do sinister things too. Oil companies fund climate change denial, thereby increasing the chance of a future environmental catastrophe that threatens the species' survival; tobacco companies systematically suppressed evidence of the lethality of their products for decades, leading to millions of deaths. This list could go on too.

So it's hard to single out tech companies for criticism---especially as a transhumanist. If only science and technology properly applied can save us, and if rich tech companies support important research in artificial intelligence, robotics, nanotechnology, and longevity research, then we need big tech. Furthermore, if the American government won't fund such research, then big tech companies are the only ones who might step up.

I do believe that tech companies have civic responsibilities, but taking such responsibilities seriously depends largely on creating a new economy, since the drive for profit, as opposed to increasing societal good, is a large part of the problem. We need an economic system that doesn't emphasize profit, weaponize disinformation, encourage despoiling the natural environment and climate, and create vast wealth inequality.

But if you have a job at a tech company and you have moral qualms about how they use their technology, then your choices include:

No doubt my readers can imagine other options.

What Work Should We Do and Why?

No matter what you choose, remember that we live in a world where money is power. Money can be used either for good (Bill Gates, Warren Buffett) or for ill (Charles Koch, Sheldon Adelson). So leaving your job will decrease your ability to do good unless, for example, you can make more money doing something else. The way the system is set up, you have to have something in order to give something.

While I am sympathetic to opting out of the system, it is nearly impossible to avoid the global social-economic-political system altogether. No matter what you do or where you go, you are enmeshed within it. In addition, if we push our concerns about causing harm to their logical limit, simply living and consuming resources may be morally problematic. Living itself may entail a kind of existential guilt. After all, what we necessarily consume---food, clothing, shelter---is unavailable to others once we consume it.

I suppose the philosophical problem is, to put it simply, how to do good in an imperfect and sometimes bad world. Unfortunately, I don't think there is any way to live in an imperfect world that isn't (somewhat) complicit in evil. What then should we do? Here is how I answered the question in a previous essay, "Should You Do What You Love?"

"So what practical counsel do we give people, in our current time and place, regarding work? Unfortunately, my advice is dull and unremarkable, like so much of the available work. For now, the best recommendation is: do the least objectionable, most satisfying work available given your options. That we can't say more reveals the gap between the real and the ideal, which is itself symptomatic of a flawed society. Perhaps working to change the world so that people can engage in satisfying work is the most meaningful work of all."

Assuming you find work that isn't too objectionable and is somewhat satisfying, what is the point of doing that work? Here's what I wrote in "Fulfilling Work":

"In the end, we are small creatures in a big universe. We can't change the whole world, but we can influence it through our interaction with those closest to us, finding joy in the process. We may not change the world by administering to the sick as doctors or nurses or psychologists, or by installing someone's dishwasher, cleaning their teeth, or keeping their internet running. We may not even change it by caring lovingly for our children. But the recipients of such labors may find our work significant indeed. For they received medical care, had someone to talk to, got their teeth cleaned, found an old friend on the internet, didn't have to do the dishes, or grew up to be the kind of functioning adult this world so desperately needs because of that loving parental care. These may be small things, but if they are not important, nothing is.

Perhaps then it is the sum total of our labors that makes us large. Our labors are not always exciting, but they are necessary to bring about a better future. All those mothers who cared for children and fathers who worked to support them, all those plumbers and doctors and nurses and teachers and firefighters doing their little part in the cosmic dance. All of them recognizing what Viktor Frankl taught: that productive work is a constitutive element of a meaningful life."

Addendum - Previous articles about high-tech and work

Irrational Protests against Google

Are Google and Facebook Evil?

https://reasonandmeaning.com/2016/10/31/summary-of-how-technology-hijacks-peoples-minds%e2%80%8a-%e2%80%8afrom-a-magician-and-googles-design-ethicist/

Fulfilling Work

Meaningful Work

Should You Do What You Love?

The Monotony of Work

https://reasonandmeaning.com/2015/10/05/autonomy-mastery-and-purpose-what-we-really-want-from-our-work/

Rethinking Work

Friendship is Another Reason to Work

What Is The Point of Money?

The Problem of Work-Life Balance

https://reasonandmeaning.com/2014/01/22/overworked/

Continued here:
Is It Moral To Work For A Tech Giant? - Institute for Ethics and Emerging Technologies
