Should We Fear the Rise of A.I.?

Picture Credit: The Register

Earlier this September, billionaire entrepreneur Elon Musk stirred up a huge Twitter storm when he posted that global competition for artificial intelligence (AI) superiority could potentially lead to World War III. The tweet came after Russian President Vladimir Putin declared, “Whoever becomes the leader in [artificial intelligence] will rule the world,” to which Musk replied, “It begins….”

Of course, Elon Musk is rather infamous for making grandiose predictions and promises that often fail to come true. In 2016, he announced that his company SpaceX would master space travel and colonize Mars as early as 2024, only to pull the plug less than a year later when he realized that traveling to Mars in 25 minutes isn’t exactly feasible. Still, Musk’s tweet about World War III is only one of his many warnings about the dangers of artificial intelligence, warnings that have gone so far as to reference the “Terminator” movies.

“AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” he stated at the 2017 National Governors Association meeting in Rhode Island.

But is the situation really that dire? Massimiliano Versace, CEO of the robotics and computing company Neurala, argues that these doomsday predictions about AI are largely unsubstantiated. In fact, his biggest complaint is that non-experts like Musk, who have little idea how AI actually works, seem to be dominating the discussion. In contrast to Musk’s warnings, Versace says that it is much too early to regulate AI and that doing so would hinder innovation.

Several other prominent figures have also pushed back against the robot-apocalypse scenario that Musk seems to predict. Google co-founder Larry Page has made the case that AI is designed to make people’s lives easier, freeing them to pursue their own interests. Likewise, Facebook’s Mark Zuckerberg compared fears about AI to early fears about airplanes and encouraged people to “choose hope over fear.”

On the other hand, Musk is hardly the only dissenting voice in the room. Renowned theoretical physicist Stephen Hawking has similarly warned that artificial intelligence could spell the end of the human race, and Microsoft’s Bill Gates has voiced worries that AI could become a problem once it grows intelligent enough to dominate the workforce.

However, rather than a “Terminator”-style takeover, my bigger concern from a cultural perspective is the direction in which AI might take the world.

It’s undeniable that today’s society places a disproportionate amount of attention on science and technology over every other discipline. Given how dependent on machines we’ve become, it’s no surprise that so many people hold degrees in math-intensive STEM subjects such as computer science, robotics and electrical engineering, or that we place these individuals on lofty pedestals. As a result, pursuing a degree in the humanities is widely seen as a high-risk gamble in the increasingly bloodthirsty modern arena known as the job market. The problem is that the widespread implementation of AI will likely exacerbate this imbalance even further.

Last March, U.S. Treasury Secretary Steve Mnuchin brushed aside all concerns about AI and stated that “In terms of artificial intelligence taking over the jobs, I think we’re so far away from that that it’s not even on my radar screen.” Unfortunately, Mnuchin couldn’t be more wrong. In reality, AI has already started to seep into the workforce.

Let’s list some examples. In San Francisco, Simbe Robotics’ Tally robot can navigate around human shoppers at the supermarket to make sure that everything is stocked, placed and priced properly. Meanwhile, in Japan, Fukoku Mutual Life Insurance has already replaced 30 of its employees with an AI system that can analyze and interpret data better and much faster than a human can. Artificial intelligence is also replacing financial analysts in the business sector simply because it can predict market patterns faster.

Not only that, careers once thought safe from the encroaching tech revolution—such as journalism and teaching—are now at risk as well. Companies such as Narrative Science and Automated Insights have created AI bots that write countless business and sports articles for clients like Forbes and the Associated Press. The United States military relies on a computer-generated virtual therapist to screen soldiers in Afghanistan for PTSD, and physical robots are being used in Japan and Korea to teach English. Even actors could be replaced by digital recreations, as with Grand Moff Tarkin in “Rogue One: A Star Wars Story.” Given how efficient and cost-effective AI is, it won’t be long until these systems are used in practically every industry.

Of course, there are various reassuring arguments out there. A common response is that new jobs will naturally emerge as old ones disappear. But exactly what kind of job will be in demand once more and more companies implement AI in their businesses? The headline of an insightful article by Clive Thompson says it best: “The Next Big Blue-Collar Job Is Coding.” Sure, jobs won’t completely disappear, but I predict that the tech industry will be the only area in dire need of employees.

Another common response is that a greater focus on STEM education will eventually solve everything. Jenny Dearborn, an executive at the software company SAP, argues that young people today have a responsibility to become more educated in technology. “If you want to do health care, major in tech with a healthcare focus. If you want to be a neuroscientist, major in tech with that focus,” she emphasized.

However, that’s easier said than done. The United States already lags behind the rest of the world in STEM education, and considering that our current Secretary of Education is a billionaire who has spent millions of dollars fighting government regulations and crippling teachers’ unions by taking away their right to strike, I’m not feeling too optimistic. Besides, what if you simply aren’t naturally inclined toward STEM skills? What about people who just don’t enjoy it?

Obviously, the last thing I want to do is bash the STEM disciplines and discourage people from pursuing STEM careers. I truly believe that science and technology can inspire wonder and excitement for everyone. However, I worry that students who discover their passions in the humanities will end up squeezed even harder by a STEM-oriented educational system than they are today. As a college student who once planned to major in the humanities, I’d hate to imagine what job searching will be like in a future where AI has made that notoriously grueling, overly competitive process even harder.

Originally published on September 14, 2017, in The Miscellany News as “Global job industries should prepare for growth in AI.”


Unlocking Axolotl: The Path Towards Regenerative Medicine

Picture Credit: Utaranews.com

Out of all the superpowers found in comic books and video games, regeneration is among the most astonishing. The idea of regrowing an arm or a leg lost in an accident sounds like an uncanny magical ability straight out of science fiction. Yet for several animals around the world, this ability is a real adaptive trait.

While notable examples include sea stars and certain species of lizards, the animals most famous for their regenerative capabilities are salamanders, amphibians that can regrow entire limbs and regenerate parts of major organs like the heart, the eyes and the spinal cord. Their abilities are so impressive that immunologist James Godwin of the Australian Regenerative Medicine Institute at Monash University in Melbourne calls them “a template of what perfect regeneration looks like.”

One salamander species deserving special attention is the axolotl, also known as the Mexican salamander (Ambystoma mexicanum). This amphibian has a one-of-a-kind capacity for regeneration: it can regrow structures like limbs, jaws, skin and even parts of its brain, without evidence of scarring, throughout its life.

The sheer amount of damage that an axolotl can recover from is absolutely extraordinary.

“You can cut the spinal cord, crush it, remove a segment, and it will regenerate. You can cut the limbs at any level–the wrist, the elbow, the upper arm–and it will regenerate, and it’s perfect. There is nothing missing, there’s no scarring on the skin at the site of amputation, every tissue is replaced. They can regenerate the same limb 50, 60, 100 times. And every time: perfect,” remarked Stephane Roy, a professor at the University of Montreal.

As a result, the axolotl is widely used as a model organism for studying regeneration. This raises the question: could this amazing regenerative ability somehow be transferred to humans? If human beings had the same regenerative capacity as axolotls, the benefits would go far beyond regrowing an arm, a leg or a finger. People could repair or regrow their internal organs whenever an organ fails, without having to rely on intensive surgery.

For instance, victims of car accidents may suffer major injuries to the backbone, the ribcage and all the soft organs within, but a regenerative ability equivalent to an axolotl’s might have them walking normally within a few months. What’s more, the axolotl is over 1,000 times more resistant to cancer than mammals are. Finding the source of this salamander’s regenerative capabilities could lead to unimaginable developments in modern medicine.

However, while the idea sounds fantastic, the execution is much more difficult than it looks. Compared to amphibians, humans have very limited regenerative capabilities, restricted primarily to their skin. So far, research into salamanders has led scientists to pinpoint the blastema, a mass of immature cells typically found in the early stages of an organism’s development, as the key to regeneration. Essentially, when an adult salamander limb is amputated, the outermost layer of skin covers up the wound and sends signals to nearby cells, which prompts the mature cells to form the blastema. From there, the immature cells start to divide and differentiate into specific muscle and nerve cells until a different signal or some form of memory tells the cells to stop regenerating.

To replicate this effect in humans, scientists use stem cells, which can differentiate into any type of cell in the body and divide to produce more stem cells. These are also known as pluripotent cells, since they are capable of developing into several different cell types. However, the blastema that salamanders produce is not completely embryonic. Instead, scientists have found that the cells used for regeneration become slightly less mature versions of the cells they once were. This means researchers don’t have to force adult tissue all the way back to pluripotency, making the task a little easier to attempt in humans.

The latest development in this field comes from a group of scientists at the University of New South Wales (UNSW), who have designed a new stem cell repair system based on the method salamanders use to regenerate limbs. According to hematologist John Pimanda, the new technique involves reprogramming bone and fat cells into induced multipotent stem cells (iMS), which can be used to regenerate muscle, bone and cartilage. The team first extracts fat cells from the human body, treats them with growth factors and compounds like 5-Azacytidine (AZA) to turn them into stem cells, and then injects them back into the body to heal tissue.

“This technique is a significant advance on many of the current unproven stem cell therapies, which have shown little or no objective evidence they contribute directly to new tissue formation,” stated Pimanda.

So far, the new technique has been successful in mice, and human trials are expected to begin by late 2017. But several obstacles still stand in the way. One primary challenge is preventing the cells from becoming cancerous as they go through regeneration. Salamanders typically don’t face the risk of malignant tumors whenever they regenerate tissue, and as stated earlier, the axolotl is in fact 1,000 times more resistant to cancer than mammals, despite how often it regenerates body parts. Right now, Pimanda and his team are making sure that the technique leads to controlled tissue repair and that cell regeneration doesn’t spiral out of control.

With progress steadily being made in regenerating bone and muscle, it may be only a matter of time until we match the regenerative capabilities of salamanders and have self-repairing organs in the future. A revolutionary development like that would save lives and help all types of patients, from those suffering from third-degree burns to those who desperately need an organ donor. Until then, researchers will continue to study salamanders and their incredible regenerative abilities to help guide them toward this goal.

Originally published on November 30, 2016, in The Miscellany News as “Research on regeneration proves beneficial.”

The New Age of DNA: How CRISPR Will Change the World

Picture Credit: Samantha Lee | Business Insider

Imagine traveling back in time to the early 1900s and trying to explain the modern computer: a box with buttons and a screen that allows people to access and manipulate all sorts of information. Your listeners would have no idea what you were talking about and would brush it off as some sort of fancy mechanical encyclopedia. You’d want to tell them just how much this invention has changed the world, but you might have trouble conveying the sheer impact of this technological cornerstone of history.

Now, imagine a technological breakthrough of the same magnitude in the twenty-first century—except instead of computers, it’s gene editing. Thanks to the invention of CRISPR-Cas9, we are currently on the cusp of a new DNA revolution. Yet most people know little or nothing about what CRISPR is and what it can do.

CRISPR-Cas9 is a unique gene editing tool that allows scientists to cut out segments of DNA from the genome of any organism and move them around or replace them entirely with stunning precision.

The technique mimics a natural defense in bacteria, which capture snippets of DNA from invading viruses and use them to recognize future attacks. CRISPR relies on a specific RNA molecule to locate the desired DNA sequence and slice it out. To perform this incision, it uses a protein known as Cas9, a special enzyme guided by the RNA to target and snip out segments of DNA. As co-discoverer Jennifer Doudna, a professor of biochemistry at the University of California, Berkeley, describes it, CRISPR is essentially “a molecular scalpel for genomes.” Think of it as the cut-and-paste tool in Microsoft Word, except with the basic building blocks of life instead of numbers and text.
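For readers who think in code, that cut-and-paste analogy can be captured in a few lines of Python. This is purely a toy string edit with invented sequences, not a model of Cas9’s actual biochemistry:

    # Toy sketch of the cut-and-paste analogy: find a target sequence
    # in a DNA string and splice in a replacement. The sequences are
    # invented for illustration; real Cas9 chemistry is far messier.
    def edit_genome(genome: str, target: str, replacement: str) -> str:
        site = genome.find(target)  # the "guide" locating its matching site
        if site == -1:
            raise ValueError("target sequence not found")
        return genome[:site] + replacement + genome[site + len(target):]

    genome = "ATGGCCATTGTAATGGGCCGCTGA"
    print(edit_genome(genome, "ATTGTAATG", "ATTCCAATG"))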

“You’re only limited by your imagination,” said Dustin Rubinstein, the director of the University of Wisconsin-Madison Biotechnology Center. He envisions CRISPR transforming practically any scientific or medical field, from cancer research and neuroscience to chemical engineering and energy production.

Some readers may be a little puzzled over the enormous fanfare in science circles around gene editing and CRISPR. Sure, this technology seems groundbreaking, but why should anyone other than scientists care? CRISPR may turn out to be one of many scientific breakthroughs featured in the news that soon disappears from the public eye.

CRISPR is not just a passing science trend. The tool allows humans to modify and rearrange DNA, which determines how the bodies of all living things function. Depending on where in the genome the changes are made, they can be permanent, and tweaks made to an animal or human can be passed down through generations. A tool of this magnitude, like the modern computer, has infinite possibilities.

“It is totally changing how we scientists genetically modify cells and even organisms. What used to take years and potentially millions of dollars can be done in weeks or months for a few thousand bucks,” said Paul Knoepfler, an associate professor in the Department of Cell Biology and Human Anatomy at the University of California, Davis.

CRISPR has the potential to curtail or even eradicate certain diseases. It has been shown capable of removing the DNA of HIV, the virus that causes AIDS, from a patient’s genome. In another example, researchers are planning to use CRISPR to treat and possibly cure blindness. After scientists successfully cut out a genetic mutation responsible for blindness in mice, biotechnology companies such as Editas Medicine began devising ways to use a similar technique on humans. This is the first step on a long road that could eventually lead to the eradication of many hereditary diseases, from Huntington’s disease to sickle-cell anemia.

So far, scientists have been experimenting with gene editing in a wide range of areas to address problems that have long plagued humankind. Last year, scientists genetically modified mosquitoes to make them resistant to Plasmodium falciparum, the parasite responsible for malaria. With CRISPR’s precision, the researchers were able to insert the necessary genes into the mosquitoes’ DNA. The mosquitoes could then replicate and pass those engineered genes on to their offspring even after mating with normal mosquitoes, creating a lineage of malaria-resistant mosquitoes.

As further evidence of CRISPR’s futuristic capabilities, Harvard geneticist and CRISPR pioneer George Church believes he can use the tool to genetically modify endangered Indian elephants into “woolly mammoths” capable of surviving in the freezing wilderness of Siberia. As a first step, Church has inserted the mammoth genes for small ears, subcutaneous fat, and hair length and color into the DNA of lab-grown elephant cells. Other scientists hope to resurrect extinct species such as the passenger pigeon (Jurassic Park, anyone?). These ideas may teeter on the border of science fiction, but CRISPR makes them conceivable.

That’s why it’s important to understand the latest developments in CRISPR-Cas9 technology, both its advantages and its flaws. Few people are aware of the emerging CRISPR revolution. In a 2016 Pew Research Center report, 68% of adults said they were “somewhat” or “very” worried about human gene editing. Yet most had little idea what they were worried about: roughly 90% reported knowing little or nothing about gene editing in the first place.

Many respondents expressed doubts about using gene editing on human babies to reduce the risk of serious diseases. “It’s messing with nature. Nothing good can come from that,” stated one participant. Another talked about how gene editing would “open the door to more manipulation of humans in an attempt to create a superior race.”

Without more comprehensive understanding about how CRISPR works and how the scientific community is embracing the revolution, it’s easy for misconceptions to form. Once unsubstantiated fear and paranoia take hold, scientists will have a much tougher time implementing the research needed to save countless lives.

The research shows that more knowledge leads to more understanding and acceptance. Pew found that those who were at least somewhat familiar with gene editing were more inclined to view it as something they might consider using for their child if it were available. If CRISPR is to fulfill its promise, that kind of personal engagement, with people actively seeking out information about the tool, is exactly what we need.

Like the early computer, CRISPR-Cas9 has incredible potential. Yes, it poses technical challenges and critics have suggested several frightening scenarios if it is misused, but there are many life-changing opportunities as well. We have the chance to challenge various types of cancer at a molecular level, address the environmental damage we’ve caused on the planet, slow the spread of disease and disability and improve the quality of life for everyone.

It’s everyone’s responsibility to be informed about the scientific and ethical issues surrounding CRISPR’s development.

“This is a remarkable technology, with many great uses. But if you are going to do anything as fateful as rewriting the germ line, you’d better be able to tell me there is a strong reason to do it. And you’d better be able to say that society made a choice to do this—that unless there’s broad agreement, it is not going to happen,” stated Eric Lander, the president and founding director of the Broad Institute of MIT and Harvard.

Originally published on October 4, 2016, in Genetic Literacy Project as “How CRISPR could change the world—And why that frightens many of us.”

Introducing D-Wave: The New Era of Quantum Computing

 

Picture Credit: Eric Nost | D-Wave Quantum Computer | 2014 | Edge Effects

In 1965, Intel co-founder Gordon Moore made a famous prediction: the number of transistors on a microprocessor chip, and thus its performance, would continue to double roughly every two years. Widely known as Moore’s law, this bold claim transformed the semiconductor industry as chip manufacturers rushed to keep pace with it. The industry has largely succeeded for the past five decades, but transistors are now approaching their physical limits, and Moore’s law finally seems doomed to fail. The quantum computing company D-Wave Systems, Inc. has a solution: if processors can no longer speed up at the classical level, why not design processors that function at the quantum level?
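To see what Moore’s claim implies numerically, here is a quick back-of-the-envelope sketch in Python (the starting year and transistor count, roughly the Intel 4004 era, are illustrative assumptions rather than precise industry data):

    # Moore's law as arithmetic: transistor counts doubling every two
    # years from an illustrative 1971 baseline of ~2,300 transistors.
    start_year, start_count = 1971, 2_300

    for year in range(start_year, 2026, 10):
        doublings = (year - start_year) / 2
        print(f"{year}: ~{start_count * 2 ** doublings:,.0f} transistors")

Even this crude projection lands in the billions by the 2010s, which is exactly why the prediction has become so hard to sustain.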

While the topic of quantum mechanics may appear daunting at first, the general idea behind this fascinating science is rather simple. Quantum mechanics is a relatively young branch of physics that focuses on processes happening at the atomic level. It arose when scientists realized that the rules of physics governing the world at large scales don’t match the behavior of subatomic particles like electrons and photons. While objects in classical mechanics exist in a specific place at a specific time, objects in quantum mechanics can exist in numerous places and be different things at the same time. And quantum mechanics isn’t confined to theoretical physics: companies that can harness these properties in their machines could upend entire industries, which is ultimately what D-Wave is trying to accomplish.

For the past several decades, the semiconductor industry has built faster, more efficient processor chips by making transistors ever smaller. The smaller the transistors, the more of them can be squeezed onto a microprocessor chip and the faster the chip can process information. So far, the most advanced microprocessors have circuit features only 14 nanometers across, smaller than most viruses. Experts estimate that in a few years, manufacturers may start building transistors only 10 atoms long. But despite the industry’s successful track record, there is a hard limit to how small a transistor can get before it becomes too unreliable to use. That is why D-Wave turned to the bizarre rules of quantum physics to create a computer that can process certain kinds of data far faster than classical computing ever will. In 2015, the company announced what it billed as the world’s first fully operational quantum computer, the D-Wave 2X system.

The big question for most people, however, is how does a quantum computer work? Canadian Prime Minister Justin Trudeau gained huge media buzz on the internet when he offered his own explanation: “What quantum states allow for is much more complex information to be encoded into a single bit. A regular computer bit is either a 1 or 0—on or off. A quantum state can be much more complex than that because as we know, things can be both particle and wave at the same time and the uncertainty around quantum states allows us to encode more information into a much smaller computer.”

Although impressive at first blush, Trudeau’s explanation is not quite correct. It is true that conventional computers use bits that are either a 1 or a 0, but quantum states don’t let you squeeze more information into a single bit. A quantum bit, or qubit, is in an indeterminate state until it is measured, at which point it becomes an ordinary 0 or 1. What makes quantum computers so powerful is that this indeterminate state, known as superposition, allows a qubit to be 0, 1 or both at the same time. This characteristic gives quantum computers like the D-Wave system their most valuable feature: the ability to consider all possibilities simultaneously and choose the best one.
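For the curious, the textbook picture of a single qubit can be sketched in a few lines of Python with NumPy. This is a bare-bones illustration of superposition and measurement only; it says nothing about D-Wave’s annealing-based hardware, which works quite differently:

    import numpy as np

    # A qubit as two complex amplitudes. Measuring collapses it to 0 or 1
    # with probabilities equal to the squared amplitudes (the Born rule).
    state = np.array([1, 1]) / np.sqrt(2)  # equal superposition of |0> and |1>
    probs = np.abs(state) ** 2             # -> [0.5, 0.5]

    rng = np.random.default_rng(0)
    print(rng.choice([0, 1], size=10, p=probs))  # each shot is a definite 0 or 1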

While a classical computer tackles a problem by considering each possible solution one at a time, experts estimate that a quantum computer can process and solve certain problems up to 100 million times faster than a conventional computer. Given this extraordinary speed, it’s little wonder that Google, NASA, Lockheed Martin and the U.S. Department of Energy have all shown great interest in D-Wave’s product.
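To make “one at a time” concrete, here is what classical brute force looks like on a toy optimization problem. The cost function below is invented for illustration; D-Wave’s machines target problems of this binary-variable shape, but without explicitly enumerating every candidate:

    from itertools import product

    # Classical brute force: score every assignment of n binary variables,
    # one candidate at a time. The made-up cost penalizes adjacent equal bits.
    def cost(bits):
        return sum(1 for a, b in zip(bits, bits[1:]) if a == b)

    n = 16  # already 2**16 = 65,536 candidates to check
    best = min(product((0, 1), repeat=n), key=cost)
    print(best, cost(best))

Every extra variable doubles the search space, which is exactly the kind of explosion that makes such problems attractive targets for quantum hardware.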

Quantum computing would allow the tech industry not only to meet the predictions of Moore’s law but to surpass them entirely. The D-Wave 2X system is expected to become an irreplaceable asset for optimization problems that require sifting through enormous stockpiles of data. Its applications range from finding more accurate patterns in weather to serving as a cutting-edge tool in financial analysis worth millions of dollars. Such an ability would usher in a new age of computing that scientists of the past would have deemed impossible.

Despite all the promise it offers, quantum computing remains a field in its infancy. The D-Wave system is not perfect, and its processing capabilities are currently limited. In addition, quantum computers like D-Wave’s cost between $10 million and $15 million because they are so difficult to build. The hardware must be cooled to just above absolute zero with liquid helium to maintain its quantum state, and any interference, whether from outside or inside, can destroy the fragile balance of the system. Yet despite these difficulties, the field has attracted organizations around the world interested in a slice of this revolutionary new benchmark in scientific achievement. In the end, we should expect further details about the semiconductor industry’s latest and greatest solution; the spirit of Moore’s law, it seems, won’t be broken anytime soon.

Originally published on April 27, 2016, in The Miscellany News as “More quantum computing research needed.”