FOREIGN PRESS USA

Foreign Press in Conversation with IBM Chief Scientist Dr. Ruchir Puri on AI and Quantum Computing


This week, journalist Thanos Dimadis, the Executive Director of the Association of Foreign Press Correspondents in the United States (AFPC-USA) met with Dr. Ruchir Puri, the Chief Scientist of IBM Research, an IBM Fellow, and Vice-President of IBM Corporate Technology and Technical Community, for an episode of our video series, Foreign Press One-on-One.

Dr. Puri is a Fellow of the IEEE, has been an ACM Distinguished Speaker and an IEEE Distinguished Lecturer, and was named the 2014 Asian American Engineer of the Year. He is also an inventor on over 70 US patents and has authored over 120 scientific papers. He discussed IBM’s goal to create “the future of computing,” describing computing as “foundational to human evolution [and] human understanding.” Much of this conversation with Dimadis focused on AI as a developing technology and its future applications.

AFPC-USA is solely responsible for the content of this educational program. Below, foreign journalists can read the takeaways from the discussion.

IBM Research and the Future of Computing

Dr. Puri framed IBM Research’s mission in sweeping, almost philosophical terms, describing it as an effort to “create the future of computing,” rooted in the idea that computation underpins not just technology, but human progress itself. He emphasized that computing is “foundational to human evolution” and “broadening human impact,” since virtually every process in the world can be understood through physics and mathematics—disciplines that naturally lead to computation. He traced a historical arc in which advances in computing map directly onto societal transformation, pointing back to the “digital revolution in the 1940s,” through IBM’s early leadership, and into key milestones: the rise of “the very first digital computer,” the dominance of mainframes as “the foundation of the entire financial transactions world,” followed by the personal computer era, the internet, and artificial intelligence. Now, he argued, the world stands at the edge of another major shift—“the next revolution… quantum computing.”

Dr. Puri cast IBM Research as both a pioneer and a guide, describing its role as “watch[ing] out for what’s next” and acting as “the headlights… not just [for] IBM… but for the world.” He pointed to the institution’s track record—highlighting its “six Nobel Prize winners” and multiple Turing Awards—as evidence of a “storied past” that positions it to keep “pushing the boundaries of computing” into the future.

Dr. Puri’s Role in Shaping IBM’s Technical Strategy

IBM Chief Scientist, Dr. Ruchir Puri

Dr. Puri described his role as shaping and executing IBM’s technical direction: a “very nuanced” position, centered on “watch[ing] out for the technical strategy” not only for IBM Research but for IBM as a whole, while also “formulat[ing], build[ing] a consensus around, and… execut[ing]” that strategy across teams. He argued that to lead effectively, one must understand technology across multiple layers, and he pointed to his own career as spanning “almost every level of abstraction” in computing. That range included everything from testing “silicon chips in the basement… to see how they will behave if they went in space,” to working on materials science, chip design, and software that automates that design, all the way up to AI systems, applications, and more recently “agents.”

Because he has worked across the stack, he can “bring these diverse sets of technologies together… through almost a personal experience,” and engage with researchers not just as a manager but as “a deep thinker who can debate with the teams.” The leadership dynamic he described is collaborative and rigorous. Teams “can debate with me, we can poke each other, we can push each other,” with the goal of refining ideas and aligning around a shared technical vision. Dr. Puri said that he had predicted as early as 2021 that “software engineering will be totally disrupted by AI,” well before tools like ChatGPT became mainstream. That kind of anticipatory thinking, combined with cross-domain expertise, underpins his role in identifying “what’s next” for both IBM Research and the broader company. Notably, he stressed that his job is “not just an inside-looking role,” but one that requires engagement with “other chief scientists… across academia” and global institutions.

Why Human Creativity Still Matters in the Age of AI

On the subject of AI, Dr. Puri said that in this new age, the most valuable human skill will be the ability to connect ideas across domains. While AI will excel at “depth in certain fields,” he argued that “cross-connecting these diverse sets of ideas” will remain a distinctly human advantage—one that will “carry for a long period of time.” The future belongs to a partnership where “AI [works] together with… this diverse nature of creativity” to drive progress. The “burden,” as he put it, is now on younger generations “to not just talk, but learn… in a hands-on way.” Unlike previous eras, where access to knowledge depended on rare mentors or elite institutions, today’s professionals are living through a moment where education has been radically democratized. What was once limited to “MIT and Stanfords of the world” is now “available on demand anywhere, everywhere,” even “in the remotest parts of Africa or Asia.”

He pointed out that because young professionals are “growing up in this era of AI,” they are uniquely positioned—but also responsible—for taking advantage of these tools. Puri described modern AI systems as being “almost like having an intern available to you all the time,” allowing users to iterate on ideas, write code, and build projects continuously. This, he suggested, enables individuals not just to learn faster, but to “amplify themselves 10X, sometimes 100X.” He also observed a structural shift in how work gets done. Where “big teams” once dominated innovation, he argued that “small is the new big.” With powerful AI tools, smaller groups—or even individuals—can now “move mountains,” because they can “learn, create, and act” far more efficiently than before.

Useful AI Versus Artificial General Intelligence

Dr. Puri drew a distinction between artificial general intelligence (AGI) and what he called “useful” AI, making clear that his focus—and IBM’s investment—is firmly on the latter. He described himself as “a huge fan of useful,” emphasizing that it is “here today” and already delivering real-world value, while AGI remains more of an aspirational, future-oriented goal that many in industry and media are “enamored with.” While he acknowledged that AGI is “an exciting goal… [with] deep implications,” he pushed back on the obsession with it, arguing that the more pressing question is not how closely AI can mimic human intelligence, but “how useful these technologies are today.” In his view, there are immediate, tangible problems across “business life… personal lives… [and] consumer life” where AI can already make a meaningful difference. He illustrated this with software development, where AI tools help eliminate routine tasks so developers can “focus on things that are more creative” and concentrate on “problem solving more than implementation.” He cited IBM’s internal coding agent, rolled out to “60,000 developers,” which boosted productivity by “45%”—a clear example of AI that is “useful,” even if it is “not… general intelligence.”

Inside IBM Research: A Legacy of Innovation and the Future of Computing

A major part of his argument centered on cost and efficiency. Dr. Puri challenged the AGI narrative by asking, “at what cost?” He contrasted the human brain, running on “20 watts… and sandwiches,” with AI systems that require massive computational power, noting that a single GPU can consume “1200 watts,” and AGI-level systems would require thousands of them. That gap, he argued, represents “orders of magnitude” difference in energy efficiency, making current AI fundamentally unlike human intelligence in practical terms. Most companies, Dr. Puri stressed, cannot “consume cost like crazy.” Therefore, AI’s value must be judged not only by capability but by whether it can be deployed “at the cost” businesses can justify. He described AGI as “almost a theoretical question” for now, whereas useful AI is about balancing quality and cost to deliver scalable, practical solutions.
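The “orders of magnitude” gap is easy to make concrete with a back-of-envelope calculation. The wattage figures below are the ones Dr. Puri cited; the 10,000-GPU cluster size is a hypothetical assumption chosen only for illustration:

```python
# Back-of-envelope energy comparison (illustrative, not a real system spec).
brain_watts = 20        # human brain power draw, as cited by Dr. Puri
gpu_watts = 1200        # single GPU draw, as cited by Dr. Puri
gpu_count = 10_000      # hypothetical cluster size for an "AGI-level" system

cluster_watts = gpu_watts * gpu_count          # total cluster power draw
ratio = cluster_watts / brain_watts            # how many "brains" of power

print(f"Cluster draws {cluster_watts:,} W, about {ratio:,.0f}x the brain")
# → Cluster draws 12,000,000 W, about 600,000x the brain
```

Even under this modest assumption, the gap is several orders of magnitude, which is the practical sense in which current AI is unlike human intelligence on energy terms.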

Making AI Practical for Business

Dr. Puri said IBM is helping companies extract real “value” from AI through a balance of performance and cost. IBM’s approach is highly collaborative, working directly with companies to “understand their problems” and “roll [solutions] out” in ways that integrate into real business processes. The emphasis isn’t just on building advanced technology, but on ensuring it delivers measurable outcomes. He said “the key word… is value,” and that value depends on two things: achieving sufficient quality and then optimizing cost. He said a product that doesn’t work is useless regardless of cost—“like a car you buy… but it doesn’t drive.” So first, AI must meet a performance threshold. He also mentioned the difference between cutting-edge “frontier models” (from companies like OpenAI, Google DeepMind, and Anthropic) and what most businesses actually need. While those models “do an amazing job,” he noted that only a small fraction of enterprise use cases—“maybe five to 10%”—require that level of deep reasoning. The majority can be handled by “smaller, more efficient… domain-specific models.” Rather than relying solely on large, expensive systems, IBM is investing in building “purpose-built” models—like its Granite series—that are tailored to enterprise tasks and designed for efficiency. The goal is to make AI “more consumable” and practical, not just more powerful.

IBM’s Breakthroughs in Efficient and Reliable AI

Dr. Puri went on to highlight several concrete breakthroughs that reflect IBM’s push toward making AI more efficient, reliable, and enterprise-ready.

Thanos Dimadis with IBM Chief Scientist, Dr. Ruchir Puri

He first pointed to advances in model architecture, particularly IBM’s work with state space models in its Granite 4.0 series. These architectures allow models to “compress information more efficiently” by reducing the memory required to store model weights. By combining traditional transformer models with these newer approaches, IBM has been able to build systems that are “much more efficient” without sacrificing capability, a key step in lowering costs for real-world use. A second major breakthrough involves what he called “uncertainty calibration.” Dr. Puri noted that most AI systems rarely admit when they don’t know something—they tend to answer confidently, even when wrong. True intelligence lies not just in retrieving known information but in “knowing what you don’t know.” IBM has been working to build models that can better assess and express uncertainty, making their outputs more trustworthy and aligned with reality.

Trust, Guardrails, and Responsible AI

He then emphasized guardrails and responsible AI as another critical area of innovation. Given the risks of bias, hallucinations, and misuse, IBM has developed “guardian” systems that are integrated directly into the models. These systems help detect issues like bias, profanity, or drift and allow enterprises to define their own constraints. This ensures that AI outputs remain reliable and aligned with organizational standards. Finally, he discussed the challenge of consistency. Unlike traditional deterministic systems, generative AI is probabilistic, meaning the same input can yield different outputs. That variability undermines trust in enterprise settings. To address this, IBM is developing more structured, “programmatic” ways of interacting with AI, moving beyond simple prompting toward more controlled and repeatable systems (including tools like their “Mellea” framework). The goal is to make AI behavior more predictable and dependable.

Dr. Puri clarified that by consistency, he meant something very practical: if the same question is asked in slightly different ways, the AI should still produce a stable, reliable answer. Right now, that’s often not the case. As he put it, “you pose a question A in a different way… [and] I get different answers.” Even when the underlying intent is the same, variations in phrasing or context can lead to different outputs. That variability might be acceptable in casual use, but it becomes a serious problem in enterprise settings. For businesses, consistency is directly tied to trust. Puri emphasized that “what enterprises need more than anything else is consistency,” because trust in a system depends on knowing that “that answer… [will] hold.” If results shift depending on how a question is worded, companies can’t reliably integrate AI into workflows or decision-making.

The Future Challenge of Autonomous AI Systems

Dr. Puri also addressed what’s possible today and what’s still evolving. On the practical side, he said, “we are there today” when it comes to useful AI. Enterprises can deploy systems with governance, monitoring, and guardrails using tools like IBM’s watsonx governance and “agent ops” frameworks. These allow companies to manage bias, detect drift, and reduce hallucinations in real-world applications. But this capability is not yet seamless or built-in. Instead of coming “out of the box,” these features still require stitching together multiple systems—“agent building frameworks, agent operations… governance frameworks… and guardrails.” In other words, the pieces exist, but they’re not yet fully unified into a single, effortless solution. He further stressed that what counts as “responsible” varies by user, industry, and use case, so it can’t be universally defined. As he put it, “what is responsible in one use case is irresponsible in another.” That means systems must remain flexible, allowing users—not just developers—to define their own guardrails. He identified the biggest future challenge as autonomous systems. As AI agents gain more “agency” (i.e., independence and decision-making power), they will begin to interact and collaborate with other agents. This raises entirely new questions around governance, security, and oversight. In this space, he was clear: “we are not there yet at all.”

AI, Quantum Computing, and Accelerated Discovery

Dr. Puri believes humans “by and large… have good intent,” which shapes how he sees technological progress. From that perspective, the leaders of the future won’t just face challenges, they’ll be “blessed” with tools that today exist only in science fiction. In his words, “science fiction will come true,” and that shift will fundamentally expand what humanity can understand and solve. He suggested that while today’s leaders rely on their own minds, future experts will operate with intelligence that is massively augmented—“using their brain… enhanced by a million other brains at their fingertip.” That kind of capability would make decision-making and prediction “mind boggling and unprecedented,” especially when applied to society’s hardest problems. He also stressed that this future won’t be driven by AI alone. The real shift comes from the convergence of technologies—especially AI and quantum computing. He described this as a kind of “bifurcation in the road,” where traditional computing continues on one path, while quantum-enabled systems open an entirely new trajectory. Under this combined model, AI can generate possibilities, like new materials or solutions, while quantum computing can simulate and validate them at a fundamental level. This creates a feedback loop: “AI… creates candidates,” quantum tests them, and the results feed back into AI. The outcome is what he called an “accelerated discovery loop”—a system capable of rapidly advancing science in ways that are currently unimaginable.
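The loop he described (propose, validate, feed back) can be sketched in a few lines. Everything below is an illustrative toy, not IBM’s actual pipeline: the propose/validate/refine functions are hypothetical stand-ins, with a simple numeric search playing the role of a discovery problem:

```python
def accelerated_discovery(propose, validate, refine, state, rounds=50):
    """Toy rendition of the 'accelerated discovery loop' Dr. Puri described:
    an AI stage proposes candidates, a (quantum) simulator scores them,
    and the results feed back into the next round of proposals."""
    for _ in range(rounds):
        candidates = propose(state)                       # "AI... creates candidates"
        results = [(c, validate(c)) for c in candidates]  # quantum tests them
        state = refine(state, results)                    # feedback into the AI
    return state

# Hypothetical stand-ins: search for a hidden target value.
target = 42
propose = lambda s: [s - 1, s, s + 1]          # nearby candidates
validate = lambda c: -abs(c - target)          # higher score = closer
refine = lambda s, res: max(res, key=lambda r: r[1])[0]  # keep the best

best = accelerated_discovery(propose, validate, refine, state=0)
print(best)  # → 42
```

The point of the sketch is the structure, not the toy problem: generation and validation are separate stages, and each round of validation narrows the next round of generation.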

Quantum Computing and the Next Technological Breakthrough

When Dimadis turned the conversation to quantum computing, Dr. Puri explained that the reason the public hears far more about AI than quantum computing comes down to one simple factor: maturity. He said every technology goes through a lifecycle, and right now AI is far ahead on that curve. It has already gone through its “breakout” phase—roughly a decade ago with deep learning and transformer models—so it’s now visible, usable, and embedded in everyday life. That’s why people talk about it constantly. Quantum computing, by contrast, is still “at the beginning of that journey.” He emphasized that its real-world impact—especially in areas like materials science, drug discovery, healthcare, and finance—is just starting to emerge. Before those applications can scale, the technology still needs to solve fundamental challenges, particularly error rates. Current quantum systems are not yet “fault tolerant,” meaning they produce too many errors to be widely useful.

Dr. Puri pointed to a key inflection point around 2029, when fault-tolerant quantum systems may become viable. He described that moment as analogous to AI’s breakthrough—a kind of “supernova explosion” that could push quantum into mainstream awareness and application. Until then, the general public has little reason to engage with it. As he put it, it’s simply “not yet to a stage where [the] general public needs to care about it.” Scientists and experts are paying attention, but widespread attention tends to follow practical, visible use cases—and those aren’t fully here yet for quantum. The one exception is security. He warned that enterprises should already be preparing for the impact of quantum on cryptography. Existing encryption systems could become vulnerable, so organizations need to start transitioning to “quantum-safe” approaches now—even before the technology fully matures.

AI, Jobs, and the Future of Work

Dr. Puri pushed back strongly against the idea that AI will hollow out the job market, framing that view as largely “fear mongering.” Instead, he argued that we’re entering a “golden era” in which technology will expand opportunity rather than shrink it. He acknowledged the widespread prediction that “80% of entry-level jobs will be lost,” but said he takes “the opposite view.” His reasoning is rooted in scale: AI isn’t staying confined to tech companies—it’s spreading into every industry. As he put it, this technology will move into “transportation… healthcare… finance… every industry you can think about.” That expansion, in his view, creates demand—not decline. These systems don’t build themselves; “engineers will be needed to build agents in all these industries.” And beyond building, there’s the harder challenge of integration. The real world, he emphasized, is “very messy”—full of outdated systems and fragmented data. Bringing modern AI into those environments requires significant human effort, and this is where he envisions job growth.

He then used a historical analogy: when the wheel was invented, certain forms of labor disappeared, but human productivity surged. “Did that job go away? Yes… but did we as humans stop evolving?” No. Instead, each technological leap has increased overall output, economic growth, and new forms of work. He believes AI will amplify productivity “even higher” than past technologies and could become a “stepping stone” to solving major global challenges—from climate to healthcare to inequality.