Will AI take everyone’s job? Or make work unnecessary, allowing everyone to laze about in hammocks? AI will not take your job, but the person using AI will! Predictions are all over the place, but this week brought a disquieting development: The New York Times’ Karen Weise reports, based on interviews and internal documents, that Amazon plans to replace more than half a million human jobs with robots. That includes about 160,000 potential hires that Amazon could avoid making by 2027.

As AI becomes embedded in our economy, experts foresee both redundancies and productivity gains. In January 2024, the International Monetary Fund (IMF) predicted AI could “affect” up to 40% of jobs globally. Last month, University of Pennsylvania researchers predicted AI would boost productivity, but with only “a peak annual contribution of 0.2 percentage points [to global GDP] in 2032. After adoption saturates, growth reverts to trend.”

Then there’s the question of what AI will do, exactly. As has been the case for about 10 years, workers roughly in the middle of the skill spectrum are expected to suffer the most AI-driven layoffs. Routine office tasks like filing can be automated, but C-suite leadership and complicated manual work can’t be automated so easily. Automation has always affected rote work, the IMF wrote, but “one of the things that sets AI apart is its ability to impact high-skilled jobs. As a result, advanced economies face greater risks.” The UPenn researchers found that jobs “around the 80th percentile of earnings are the most exposed, with around half of their work susceptible to automation by AI, on average.”

At Der Spiegel, Simon Book, Patrick Beuth, Angela Gruber, Max Hoppenstedt, Marcel Rosenbach and Martin Schlak write that we are still finding out just what the impact will be. AI is now doing some of the predicted tasks: one German startup, for instance, offers an AI agent for tradespeople that drafts customer cost estimates and handles other assorted paperwork. LinkedIn co-founder Reid Hoffman points out to the Der Spiegel authors that the rise of PCs didn’t eliminate bookkeeping as a profession; rather, it drove the same people into new jobs as financial risk analysts and portfolio managers: more interesting work, not unemployment.

One way to understand AI is to note what it lacks: common sense. The Der Spiegel authors survey a nearly fully automated Amazon warehouse in Shreveport, Louisiana, where robots sort most of the products, working with other robots. “But there is still room for humans,” they write. “Amazon employee Chestney Flemming is standing in front of a screen showing a conveyor belt … In front of him, [a robot named] Sequoia is selecting a blue box from a stack … Sequoia grabs the bin and sends it along the conveyor belt to Flemming, who then picks out the correct article to fill the customer’s order and returns the bin to the robot … It would perhaps be possible to deploy a humanoid robot to do Flemming’s job, [Amazon AI head Aaron] Parness says, but it would hardly be efficient. A person needs just a few seconds to find the correct article in the box, pick it out and package it. A machine would take quite a bit longer. Humans are able to easily move among numerous different environments during the workday, fulfilling several tasks at the same time and have an ‘understanding of the world and each other.’ Machines have none of that.”
AI lacks common sense because it does not have the contextual, intuitive, and social understanding that humans have. This deficiency leads to “shockingly stupid” errors, such as mistaking a picture of a stop sign for a real one or producing nonsensical statements, even while performing complex tasks like writing or coding, as highlighted in this Forbes article and this Northeastern University article. Ultimately, current AI models are trained to predict the next word in a sequence rather than to “make sense” of the world through experience, which is the foundation of human common sense.

Manifestations of AI’s lack of common sense:

- Literal and nonsensical outputs: AI can produce statements that are grammatically correct but nonsensical or factually wrong, such as writing a biography that mentions a person’s death without understanding the implication, as seen in this Forbes article.
- Failure to recognize context: AI systems can fail to apply contextual understanding, like differentiating a white wall from a white shirt or confusing the real world with the software environment, as discussed in this USC Today article and this Marcus on AI article.
- Lack of emotional and social intelligence: AI systems do not share the social and emotional understanding that people use to read one another.
- Inability to learn from experience: Despite being trained on vast datasets, some AI models fail to learn from their mistakes, repeating the same errors or being “too willing to immediately accede to user requests,” as seen with the “Claude” chatbot example in this Medium article.
- Dangerous failures in real-world applications: The lack of common sense can have serious consequences, such as self-driving cars stopping for a picture of a stop sign on a billboard, as described in this Northeastern University article.
Why AI struggles with common sense:

- Training vs. understanding: AI’s core function is often to predict the next word in a sequence based on patterns in its training data, not to develop a genuine understanding of the world. This is different from how humans learn, by forming hypotheses, experimenting, and interacting with their environment, as this YouTube video and this other YouTube video explain.
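The “predict the next word” point is easy to see in miniature. The sketch below is a deliberately toy, hypothetical illustration (real models such as GPT use neural networks trained on vast corpora, not simple counts), but it shows the basic idea: the system scores likely continuations from patterns in its training text, with no model of the world behind them.

```python
# Toy "predict the next word" model: a hypothetical illustration only.
# Real large language models use neural networks over enormous corpora,
# but the objective is similar -- choose a plausible continuation based
# on patterns in training text, not on understanding what the words mean.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

# Generate text purely from co-occurrence statistics; the "model" has no
# idea what a cat, a dog, or a mat actually is.
word = "the"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Run it and it emits loops like “the cat . the cat . the cat”: locally plausible fragments driven entirely by statistics, which is the gap between fluent output and common sense that the articles above describe.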