Many sectors are speculating about what AI will mean for their futures. It might be “particularly significant” for the legal profession and could “reinvent” market research. It has the potential to alter what it means to be a graphic designer.
It may hollow out call centers. Will it make white-collar office workers all across the world more productive, redundant, neither, or both? An open letter signed by a diverse group of public figures, including Elon Musk, Yuval Noah Harari, and Andrew Yang, calls for a “pause” on “giant AI experiments.”
These are mostly distant projections about what one industry’s products might mean for all the others. Within the tech business itself, though, there is rather more certainty about where AI automation will matter most, and soonest. The future might as well be on auto-complete: AI is clearly coming for software first. Where else would it begin?
As a feeling, this is understandable. Money is pouring into AI from across the startup ecosystem. Major tech companies are making huge AI investments while laying off thousands of other employees.
It’s all anyone in the industry can talk about, and true believers abound. If you’re already worried that you’re not working on the next big thing, it only follows (emotionally, to you, the staff software engineer at a company that has nothing to do with AI) that the next big thing will crush you underfoot, or at the very least change your job in unpredictable ways.
Yet the notion that software development will be among the first to feel the effects of LLM-based AI is more than a hunch. While the general public was experimenting for the first time with chatbots like ChatGPT and image generators like DALL-E and Midjourney, developers were already using AI assistants at work, some built on the same underlying technology.
GitHub Copilot, a coding assistant created by Microsoft and OpenAI, “analyzes the context in the file you’re working in, as well as related files, and offers suggestions” about what to do next, in order to speed up programming. It has recently become more ambitious and assertive, attempting a broader range of programming tasks, such as debugging and code commenting.
Copilot has received mixed reviews; at the very least, it’s a reasonably decent auto-complete for many coding tasks, which suggests its underlying model has done a significant amount of “learning” about how everyday software works.
Tyler Glaiel, a game developer, found that GPT-4 could not solve tricky, novel programming problems and that, like its content-generating cousins, it has a tendency to “make shit up” anyway, which “can waste a lot of time.” He did, however, give GPT-4 some credit on the question of whether it can “actually write code”:
Indeed, GPT-4 can write code given a description of an algorithm, or a description of a well-known problem with plenty of existing examples on the web. It’s largely just stitching things together and remixing them, although TO BE FAIR… a lot of programming is like that.
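Glaiel’s point is easy to illustrate. A prompt like “write a binary search over a sorted list” describes exactly the kind of well-worn problem with thousands of published solutions for a model to remix. The sketch below is a hypothetical example of that genre written for this article, not actual GPT-4 or Copilot output:

```python
# The kind of well-trodden problem an LLM can "write" reliably:
# binary search, a task with countless published examples to draw on.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # midpoint of the remaining search window
        if items[mid] == target:
            return mid
        elif items[mid] < target:  # target is in the upper half
            lo = mid + 1
        else:                      # target is in the lower half
            hi = mid - 1
    return -1
```

Nothing here is novel, which is precisely the point: it is assembled from patterns that appear in textbooks and repositories everywhere, the territory where today’s assistants are most reliable.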
Former Twitter VP and Googler Jason Goldman evaluated the technology from the perspective of a common business type: the manager who cannot code. Although OpenAI was first to deploy useful AI coding tools, Google said this week that it was collaborating with Replit, a popular software development environment, on a general-purpose coding assistant.
Replit’s CEO, an ecstatic Amjad Masad, told Semafor that coding was “nearly the perfect use case for LLMs,” and that his company’s ultimate goal was for its assistant to become “totally autonomous,” allowing it to be treated like an extra employee.
This month, SK Ventures’ Paul Kedrosky and Eric Norlin laid out a more detailed bull case for AI software development: the present generation of AI models, they write, is a rocket aimed directly at software production, albeit unintentionally.
Certainly, conversational AIs can produce undergraduate essays or whip up marketing copy and blog posts (as if we needed more of either), but these technologies are remarkably good at developing and debugging software, and at accelerating its creation, quickly and almost costlessly.
This, they claim, is due in part to the fact that “[s]oftware is even more rule-based and grammatical than conversational English, or any other conversational language,” and that “programming is a good example of a predictable domain.” In their view (fairly optimistic, but also, you know, they’re investors), this will let people make and use software where they previously could not, quickly paying down “society’s technical debt.”
And who knows, maybe! In any event, it is apparent that the software industry is highly exposed to the consequences of its newest products, whatever they turn out to be, and that its workers and employers have been quick to test and adopt them. It stands to reason that the effects of increased automation on labor (fewer jobs, more jobs, different jobs, wage pressure, displacement) will be seen early, if not immediately, in the industry where it is first and most fully deployed, and where it appears to be especially capable.
One such place is within the firms developing AI technologies themselves. Google is a software corporation that intends to supply AI-based software to its users and to clients at other companies; it is also an employer of more than 150,000 people that recently cut its workforce by 6 percent. Google CEO Sundar Pichai specifically cited the company’s investment in AI in his layoff announcement.
“Being constrained in some areas allows us to bet big in others,” he wrote. “Pivoting the company to be AI-first years ago led to groundbreaking advances across our businesses and the whole industry,” he added, pointing to the “huge opportunity in front of us with AI.”
Google is undoubtedly well positioned to sell and deliver AI services to others. It is also, perhaps, the ideal customer for its own ostensibly productivity-boosting tools: dozens of offices full of coders and product managers and emailers and deck-makers and meeting-holders (not to mention countless lower-paid contractors spread around the world) testing the tools in a single corporate environment.
Before Google truly understands what its products will do for, and to, its customers and employees elsewhere, it will most likely begin to understand what those products do for itself. If LLM tools turn out to be wildly overhyped and fail to deliver much usefulness or change, Google will be among the first to know, though it may not be keen to publish the findings.
According to OpenAI’s own analysis, certain tech occupations look especially susceptible to LLM-based tools. “We find that roles heavily reliant on science and critical-thinking skills show a negative correlation with exposure,” the company wrote, “while programming and writing skills are positively associated with LLM exposure,” and “around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers could see at least 50% of their tasks impacted.”
Now, one of the world’s leading LLM companies would say that, and a firm with fewer than 400 employees can afford to speculate wildly about what will happen to everyone else.
(OpenAI, for what it’s worth, employs thousands of foreign contract laborers to help clean up its models, doing work that is itself quite “exposed” to near-term automation.) It’s also the kind of prediction that might pique the interest of Microsoft, OpenAI’s largest funder by far and its partner on a slew of AI-powered features in popular software such as Windows, Office, and, of course, GitHub.
Microsoft, like Google, has been cutting costs, primarily by shedding thousands of workers, including some on GitHub’s international teams. Its investment in AI can be read in two ways: as a bet on a new type of product from which it might profit, or, more immediately, as an investment in automation that simply saves it money on labor, like a new machine on a factory floor.