{
"results": [
{
"title": "To boost research, states are building their own AI-ready supercomputers",
"url": "https://www.science.org/content/article/boost-research-states-are-building-their-own-ai-ready-supercomputers",
"snippet": "In 2024, the Nobel Prize in Chemistry went to the Google DeepMind team behind AlphaFold, a disruptive artificial intelligence (AI) system that predicts the 3D structure of proteins based on their amino acid sequences. But proteins by nature are shapeshifters, morphing as conditions like pH or temperature change. To forecast those transformations, University at Buffalo (UB) structural biologist Thomas Grant is building a homegrown AI spinoff called SWAXSFold. And he's doing it without Google's deep pockets.... Universities are hard-pressed to afford the costly, cutting-edge AI chips that systems like SWAXSFold require, and that Silicon Valley tech giants are stockpiling in their race to build ever-larger chatbots. But New York state is now stepping in, giving researchers like Grant access to the computers they need through a $500 million, 10-year initiative called Empire AI. Launched last year, it will inaugurate its second supercomputer in the coming months, one expected to be among the most powerful AI-focused academic supercomputers in the nation.... Empire AI, a consortium of nine New York universities and the Flatiron Institute of the New York City-based Simons Foundation, aims to close those gaps. The state is providing the bulk of funding, and Simons donated Empire AI's first supercomputer, Alpha. The machine, housed at UB and powered by nearly 200 of NVIDIA's most advanced GPUs, is modest by AI standards.... But the second machine, Beta, will be an order of magnitude more powerful, driven by NVIDIA's latest, coveted Blackwell chips, a rare coup for the academic community. Those chips will be linked by a powerful network, allowing them to pass information far faster and with less hardware than even systems like DOE's El Capitan, the most powerful supercomputer in the world, according to Ian Fisk, an Empire AI board member and the Simons Foundation's chief technology officer. 
'That improvement in efficiency translates into the number of science problems you can look at,' Fisk adds.... In its first year, Empire AI has already served more than 350 researchers at the member institutions. One is Columbia University statistician Tian Zheng, who is attempting to combine many scattered clues (weather records, neighborhood flood reports, and city incident logs) to predict where flash floods will strike in urban areas. Such fine-grained forecasts are beyond the reach of conventional climate models. 'There are stages of research [that are] high risk and exploratory in nature,' she says. Having access to Empire AI's widely available, powerful machines 'vastly expands your imagination.'... For New York University (NYU) neuroscientist Christine Constantinople, the biggest change has been speed. She trains virtual neural networks to mimic how brains make decisions. Runs that took 1 week on NYU's clusters now finish in 1 day on Empire AI, letting her explore different architectures, scale up model size, and iterate more freely. 'It becomes transformative in terms of your ability to explore different possible models,' she says.... Weill Cornell Medicine computational biomedicine researcher Ekta Khurana hopes to harness AI to diagnose an aggressive, treatment-resistant subtype of prostate cancer her team has identified. They plan to train machine-learning models on pathology images from patient biopsies, work Cornell's own clusters couldn't support. With Alpha, however, they managed to train a model on hundreds of images; with Beta, they hope to reach more than 10,000.... By 2027, Empire AI anticipates completing its third supercomputer, Gamma, which will be another 10 times more capable than Beta. A fourth machine, Delta, would come after that. 'The idea is to keep a machine which is state of the art for the length of the program,' Fisk says.... Federal efforts to create AI infrastructure continue. 
A panel of experts has recommended spending $2.6 billion over 6 years to move NAIRR beyond existing NSF-funded GPU clusters, where unmet demand often results in long queues, says Hanna Hajishirzi of the Allen Institute. Although a 56% cut in NSF's 2026 budget proposed by President Donald Trump would make it difficult for the agency to advance NAIRR past its pilot run, the administration has consistently indicated that AI is one of its few scientific priorities.... Last week, for instance, Trump signed an executive order for a new Genesis Mission that directs DOE to integrate troves of federal scientific data sets into a centralized platform for AI-based research.\nThe directive comes on the heels of an announcement in October that DOE would join with NVIDIA and Advanced Micro Devices to build nine new AI-focused supercomputers. With the companies covering part of the cost of the systems, the deals could help DOE labs keep up with the latest hardware. But it's unclear how far that support will go. Tech firms usually 'aren't in the business of philanthropy,' Norman says.",
"date": "2025-12-05",
"last_updated": "2025-12-08"
},
{
"title": "AI immunologists are here: Are they ready for prime time?",
"url": "https://www.science.org/doi/10.1126/sciimmunol.aea8735",
"snippet": "Large language model (LLM)-based artificial intelligence (AI) agents are powerful tools that can help researchers automate complex tasks such as literature review, data mining, computational code generation, and summarization of existing knowledge, but they can still fall short in developing original biological hypotheses and insights (see related Research Article by Rodriguez-Coffinet et al. in this issue). Emerging advances in multiagent systems and human-agent collaborative frameworks offer promising steps forward.",
"date": "2025-12-05",
"last_updated": "2025-12-07"
},
{
"title": "As Energy Department prioritizes AI and fusion, basic research faces squeeze",
"url": "https://www.science.org/content/article/energy-department-prioritizes-ai-and-fusion-basic-research-faces-squeeze",
"snippet": "The U.S. Department of Energy (DOE) is reorganizing its scientific efforts by establishing new offices for types of research favored by the White House. A two-paragraph press release and revised organizational chart issued on 20 November announced the launch of an Office of Fusion and an Office of Artificial Intelligence and Quantum. DOE provided no further details, but the move worries scientists. The new offices could be formed by carving pieces out of DOE's storied Office of Science, the United States's largest funder of the physical sciences and its builder of big facilities from x-ray sources to atom smashers.... Observers say the change suggests the Office of Science, which has an $8.2 billion annual budget and funds everything from biofuels research to particle physics, could shrink and effectively become DOE's office of misfit research. 'We can't abandon the rest of science,' warns one longtime DOE observer who requested anonymity to protect relations with the agency.... On 24 November, President Donald Trump underscored his administration's emphasis on artificial intelligence by signing an executive order to launch the Genesis Mission, a national effort 'to unleash a new age of AI-accelerated innovation and discovery that can solve the most challenging problems of this century.' Genesis will be led by DOE Under Secretary for Science Dario Gil.... In a four-page open letter to the DOE research community, Gil, a computer scientist, calls for 'an integrated science and security platform connecting the world's best supercomputers, AI systems, and next-generation quantum computers to the most exquisite scientific instruments in the nation' to create 'the most complex and powerful scientific instrument ever built.'... Gil's letter doesn't mention the reorganization, and at press time, DOE had not responded to emailed questions. But the new AI-quantum office is sure to play a prominent role in Genesis.... 
The need for the fusion office has grown with the emergence of multiple private companies aiming to achieve fusion power in a few years, says Martin Greenwald, a physicist at the Massachusetts Institute of Technology and co-founder of Commonwealth Fusion Systems. Still, the new office will answer to Gil and not to the undersecretary for energy, 'protecting it a little bit' from the pressure to produce immediate commercial results, Greenwald says.... Placing both AI and quantum technologies in a single office makes sense, says Steven Girvin, a quantum engineer at Yale University, as AI has emerged as a powerful tool for developing quantum computers. The office's effectiveness will depend on coordinating myriad strains of research, he says. 'It's nice to have one box on the org chart that sees the big picture of all the quantum investments,' he says. 'But whoever does that work has to spend all their time then making connections to all the other types of science that quantum needs.'... The AI office may absorb the Office of Science's advanced scientific computing research program. In October, DOE announced a new strategy for that program, in which companies such as NVIDIA and Oracle would build and maintain cutting-edge, AI-focused supercomputers at the national labs and DOE would buy time on them. But Jack Dongarra, a computer scientist at the University of Tennessee, Knoxville, worries DOE's existing strength in high-precision simulations of phenomena ranging from climate to cosmology could be at risk.... Machines designed for AI may perform worse than current ones for simulations, Dongarra says. 'For AI, [the arrangement] is a good thing, but it's not good for the scientific computing and the simulations that we are traditionally doing.'... The reorganization could even cost the Office of Science some of the 10 national labs it currently controls. The Princeton Plasma Physics Laboratory would likely be transferred to the new fusion office, sources say. 
However, practical problems may slow the changes. DOE has recently lost many employees, Decker notes, so staffing the new offices, especially the one for AI and quantum, 'is going to be a real challenge.'... Some observers question how dramatic the reorganization will be. Forming the new offices may be a 'performative' gesture designed to show the department embraces the White House's priorities, says one former Office of Science senior manager. 'But on the ground, how much is really going to change?' Still, he adds, 'I think the marginalization of the Office of Science is real.'... Indeed, for 80 years, Democrats and especially Republicans have acknowledged the importance of the kind of basic research DOE pursues, says Robert Rosner, a physicist at the University of Chicago and former director of Argonne National Laboratory. The White House's emphasis on hot technologies such as AI may break with that tradition, Rosner says.... 'The compact made during World War II that the federal government has to play the leading role in maintaining science in the United States, that's coming into question,' he says. 'And that's really concerning.'",
"date": "2025-12-02",
"last_updated": "2025-12-08"
},
{
"title": "Watch this tiny robot somersault through the air like an insect",
"url": "https://www.science.org/content/article/watch-tiny-robot-somersault-through-air-insect",
"snippet": "The world's most daring stunt pilot would struggle to outmaneuver a fruit fly. Aerial insects are some of the nimblest creatures on the planet, expertly pulling off rapid turns, abrupt stops, and midair flips with an agility engineers have long strived to bestow on similar-size flying robots or drones. Now, a group of scientists at the Massachusetts Institute of Technology (MIT) has taken a major step toward that goal. Their tiny winged robot, described today in Science Advances, is faster and more acrobatic than any of its predecessors, even approaching the agility of real insects.... The newly unveiled device represents 'a dramatic leap forward in microrobot performance,' says Hoang-Vu Phan, an aerospace engineer at the University of Nevada, Reno, who wasn't involved in the study. 'This work brings the field closer to truly autonomous, insect-scale flying robots capable of real-world tasks.'... Although aerial drones and other flying machines have gotten more sophisticated in recent years, shrinking them down to the size of an insect has proved remarkably tricky. 'You have to build everything from scratch,' says YuFeng 'Kevin' Chen, an engineering physicist at MIT and co-author of the new study. Motors become less efficient at small scales, and even small amounts of turbulence strain flapping wings and tiny joints, which are typically thin and delicate to save weight.... And whereas real insects can smack into windowpanes unscathed and withstand intense gusts of air, the flying microbots aren't nearly as durable, often breaking down quickly because the synthetic materials used in their construction simply can't match the toughness of real insect body parts.... Chen's team overcame many of these hardware-related challenges in a previous project, crafting a resilient, 750-milligram flyer that could remain airborne for 1000 seconds at a time. But its controller, the electronic 'brain' that tells a robot what to do, presented another dilemma. 
To successfully accelerate, turn, and flip through the air, a flying microbot must constantly adjust for tiny changes in airflow and friction, requiring an extremely efficient controller that can handle uncertainty.... Study co-author Jonathan How, an astrophysicist and aeronautical engineer at MIT, tackled this issue by designing what's known as a tube model predictive controller (MPC). If a person is trying to get from point A to point B, How explains, they might plan out a single direct path. But if they get blown off course, how do they know the original path is still safe? A tube MPC solves this problem by creating a tube-shaped buffer zone around a robot's central trajectory, ensuring that disturbances don't knock it into dangerous territory.... But the real 'special sauce' of the controller, How adds, was the addition of a neural network, an algorithm that mimics the central nervous system of real fruit flies. This programming allows the controller to quickly plan optimized paths, enabling the robot to aggressively twirl through the air 'in such a way that it doesn't kill itself,' he says.... The resulting device, which measures just 4 centimeters across and weighs less than a paper clip, flies almost five times faster and accelerates twice as quickly as existing microbots. It can also execute sharp turns while enduring 160-centimeter-per-second wind gusts and, perhaps most impressively, can complete 10 consecutive somersaults in 11 seconds. As Phan notes, the robot demonstrates 'levels of speed, agility, and robustness previously observed only in real insects.'... Of course, several important limitations remain. 'One of the elephants in the room is to get rid of the tether,' says Pakpong Chirarattananon, a roboticist at the University of Toronto who wasn't involved in the new work. Because an insect-size battery would burn out quickly, he notes, the device must stay attached to an external power source, limiting its movement.... 
The tether represents a long-term obstacle, but Chen and How also hope to design cameras and other sensors that are small enough to fit onboard the robot, which could make it valuable for search-and-rescue missions. 'If there's an earthquake,' Chen explains, 'we could send these tiny robots into the cracks.'\nInsect-size flying robots are also being eyed as tools for assisted pollination, but How says landing a robot on a delicate flower might be pushing it. 'We'd love to do that, just fly in and sort of dive bomb,' he says. 'But that's a little beyond where we are.'",
"date": "2025-12-03",
"last_updated": "2025-12-04"
},
{
"title": "Political persuasion by artificial intelligence",
"url": "https://www.science.org/doi/10.1126/science.aec9293",
"snippet": "Technological advances have layered another concern into this arena: Will artificial intelligence (AI) technologies supercharge the spread of misinformation and the manipulation of public opinion to the detriment of democratic governance? Hackenburg et al. (2), on page 1016 of this issue, and Lin et al. (3) report a varying capacity of generative large language models (LLMs) to persuade citizens about political matters. These studies find that AI can be effectively, although not extraordinarily, persuasive, and they raise important concerns about the scope and effect of AI-generated misinformation.... A growing literature demonstrates that these models, in addition to their many other uses, are highly proficient in producing persuasive text about political topics (4, 5). As people increasingly interact directly with LLMs that are built into their search engines, operating systems, and other apps, the potential for AI to influence users' political opinions (and, by extension, collective democratic outcomes) is further amplified.... Hackenburg et al. and Lin et al. conducted large-scale experiments in which survey respondents each had one short, text-based, and multiturn interaction with an LLM that was instructed to persuade the human respondent about a political issue or candidate. Hackenburg et al. conducted more than 77,000 surveys of UK-based respondents, testing the relative persuasiveness of 19 different LLMs and eight different persuasive strategies across ~700 political issues.... Lin et al. tested the ability of LLMs to persuade more than 5800 people about candidates for president or prime minister during elections in the US, Canada, and Poland and 500 people about a local ballot measure in the US. Both Hackenburg et al. and Lin et al. 
asked respondents to rate the relevant issue or candidate on a 0 to 100 scale before and after the conversation, and both found that interactions with state-of-the-art LLMs move attitudes about a specific political issue roughly 10 points.... Additionally, Lin et al. compared issue-based persuasion with persuasion about candidates for public office and report that the effects of LLM persuasion on attitudes toward candidates are less consistent and are several points smaller on average.... LLMs can produce high-quality text in response to detailed prompts almost instantaneously, which enables efficient and scalable personalized messaging. However, there are concerns that if messages from LLMs are highly personalized, they might negatively affect political reasoning by reducing the range of arguments to which someone is exposed and by appealing to idiosyncratic personal biases.... This builds on previous concerns about online 'echo chambers' (11). Both Hackenburg et al. and Lin et al. incorporated personalization into their experimental designs: A random subset of LLMs received personal information about the user, such as their existing attitudes, partisanship, or other demographic traits, and were prompted to tailor their messages specifically to those individuals.... In addition to examining message personalization, Hackenburg et al. and Lin et al. document the central role of information provision in the persuasiveness of LLM interactions. Of the eight different persuasive strategies tested by Hackenburg et al., the most effective was instructing the model to persuade by providing as much information as possible.... They estimated that each fact-checkable claim was associated with ~0.3 percentage points of attitude change (individual messages could have upward of 20 claims) and concluded that information density is the primary single mechanism accounting for variation in the success of LLM persuasion across models and prompts. 
Testing the inverse approach, Lin et al. found a substantial reduction in persuasive capacity when an LLM was prompted not to use any factual claims in the persuasive interaction. The fact-based nature of LLM persuasion contrasts with evidence that information is not the dominant persuasive strategy in human interactions (13), which further highlights the complexity and contingency of persuasion research.... Unfortunately, a substantial portion of the information provided by the LLMs in these exchanges was false. Hackenburg et al. used separate, search-enabled LLMs to fact-check more than 460,000 claims made by LLMs in the persuasive exchanges. Depending on the model, between 15 and 40% of informational claims made by the LLM were likely misinformation.... Lin et al. used a similar process and discovered that, in all three countries, LLMs are more likely to produce misinformation in support of candidates or positions on the ideological right. In both studies, the persuasiveness of informational claims did not depend on their accuracy: respondents were just as likely to be persuaded by false information as by true claims. Political decision-making on the basis of fabricated information, particularly when the generation of that information is infused with asymmetric ideological bias, is a fundamental threat to the legitimacy of democratic governance.... Simultaneously, this speaker is highly persuasive, and seemingly even more so because it is often unconstrained by the truth. Although Hackenburg et al. and Lin et al. provide relative optimism that manipulation through personalization provides limited marginal returns, they also demonstrate that LLMs are able to use a high density of often fabricated information to produce attitude change at scale.",
"date": "2025-12-04",
"last_updated": "2025-12-08"
},
{
"title": "Assessing AI's cognitive abilities for scientific discovery in the field of systems vaccinology",
"url": "https://www.science.org/doi/10.1126/sciimmunol.adx1794",
"snippet": "INTRODUCTION\nThe advent of large language models (LLMs) has considerably transformed the academic landscape, offering researchers tools to enhance their investigative processes. Models such as ChatGPT (OpenAI), Microsoft Copilot (Microsoft Corporation), SciSpace (Typeset.io), and LLaMA (Meta Platforms) are well suited to analyzing and inferring patterns from vast datasets, granting researchers unparalleled access to immunological, molecular, and genomic knowledge and the ability to generate new hypotheses (1–5). By analyzing extensive datasets and synthesizing information from diverse sources, these sophisticated artificial intelligence (AI) systems offer scholars insights that may have previously gone unnoticed (5).... LLMs have already demonstrated their potential for scientific discovery in molecular biology. For example, the LLM4SD (LLMs for Scientific Discovery) framework has successfully leveraged LLMs to synthesize knowledge from literature and infer previously unidentified patterns from molecular datasets to improve molecular property prediction (6). By identifying relationships between molecular structures and their biological functions, LLM4SD has outperformed existing benchmarks, demonstrating how AI can generate interpretable insights (6). This success raises the possibility that similar approaches could be applied to other scientific domains, such as vaccinology, where AI could assist in hypothesis-driven research and mechanistic discovery.... In this study, we tested five different LLMs (ChatGPT-4o, ChatGPT-4.5, Microsoft Copilot, LLaMA, and SciSpace) on the basis of a four-tier evaluation framework: (i) accuracy of recall of relevant literature, (ii) formulation of biological hypotheses, (iii) proposal of experiments to validate hypotheses, and (iv) inference of broader conceptual significance of results.... 
RESULTS\nEvaluating the cognitive potential of LLMs in systems vaccinology\nWe evaluated the cognitive abilities of five LLMs (ChatGPT-4o, ChatGPT-4.5, Microsoft Copilot, LLaMA, and SciSpace) using a structured framework to analyze molecular signatures previously identified in systems vaccinology studies as predictors of vaccine-induced immune responses. These signatures, derived from multiomics profiling, capture baseline and early postvaccination transcriptional patterns linked to immune magnitude and durability (7, 18).... Evaluating GCN2 analysis: Strengths and limitations of LLMs in immunology research\nThe evaluation of GCN2 across the five LLMs (ChatGPT-4o, ChatGPT-4.5, Microsoft Copilot, SciSpace, and LLaMA) revealed a nuanced landscape of capabilities relevant to data mining, hypothesis formulation, experimental design, and conceptual significance in the context of immunology research.... ChatGPT-4o, Microsoft Copilot, and SciSpace demonstrated strong proficiency in identifying relevant literature and extracting key data points related to GCN2 functions, each achieving 100% validity, and in their ability to retrieve appropriate existing literature based on the reference dataset (table S1), as illustrated in Fig.... When considering experimental design capabilities, the top three models outlined coherent and feasible experimental approaches (Fig. 3C). ChatGPT-4.5 went further by proposing a detailed and technically robust pipeline for investigating GCN2-driven extracellular vesicle signaling, encompassing peripheral blood mononuclear cell (PBMC) culture, extracellular vesicle (EV) isolation, multiomics cargo profiling, and in vivo validation, summarized in fig. S1B. In contrast, LLaMA's experimental designs were often biologically unrealistic or lacked necessary methodological detail.... 
Last, regarding conceptual importance, ChatGPT-4.5 provided the strongest and most integrated framing of GCN2's immunological roles, including translational perspectives such as using EV cargo signatures as biomarkers of vaccine efficacy or potential therapeutic vectors (Fig. 3D and fig. S1B). ChatGPT-4o, Microsoft Copilot, and SciSpace also contributed contextually relevant insights, whereas LLaMA's contributions were limited in depth and specificity.... Overall, this comparative evaluation highlights the importance of leveraging advanced LLMs, such as ChatGPT-4o and ChatGPT-4.5, for comprehensive immunology research, while noting the variable capabilities of other models.\nSREBP signaling: A metabolic key to B cell responses and vaccine efficacy... 5D and table S2). Meanwhile, results from ChatGPT-4.5 reinforced these findings with more detailed analyses of hypothesis formulation and experimental design, underscoring the progression in LLM capabilities (Fig. 5, B and C, and fig. S2, A and B). Overall, this evaluation highlights that ChatGPT-4o, Microsoft Copilot, and SciSpace excel in mining and integrating knowledge consistent with the literature, whereas LLaMA requires improvements.... In parallel, a comparative assessment involving ChatGPT-4o, ChatGPT-4.5, Microsoft Copilot, SciSpace, and LLaMA is presented in Fig. 7, highlighting their varied strengths and weaknesses across four key evaluation tiers related to TLR5's rol",
"date": "2025-12-05",
"last_updated": "2025-12-08"
},
{
"title": "Smart chains: Designer antibodies shaped by AI",
"url": "https://www.science.org/doi/10.1126/sciimmunol.aee1940",
"snippet": "In 1890, Behring and Kitasato generated antibodies by injecting inactivated bacterial toxins into animals. The activation of selected B lymphocytes by antigen is followed by the somatic hypermutation of antibody genes and a Darwinian selection process that only chooses and expands B cells that have evolved to secrete high-affinity antibodies.... Now, Bennett et al. have used in silico artificial intelligence (AI)-based approaches to create, literally from scratch, the binding sites of antibodies to specific epitopes on influenza hemagglutinin and Clostridium difficile toxin B: no immunization, no adjuvants, no animals, no random library screening.... This group had previously developed an AI platform called RFdiffusion to design proteins that bind to specific targets. Previous AI approaches to protein design, including RFdiffusion, relied heavily on modeling interactions based on amino acid residues that make up regular secondary structure elements, either beta strands or alpha helices. However, whereas the framework regions of immunoglobulin domains are made up primarily of beta strands, the binding site of the antibody is created by the CDR1, CDR2, and CDR3 loops; loops are more difficult to model predictively.... To overcome this, the authors fine-tuned RFdiffusion by training it on known antibody-antigen complexes, resulting in a version that performed more accurately in modeling loops around specific protein epitopes.... The authors then leveraged another deep learning-based tool they had developed a few years earlier called ProteinMPNN to design CDR loop sequences that bind epitopes of interest. Importantly, they then experimentally produced and tested these de novo antibodies, using yeast display, SPR, and cryo-EM, to confirm that the designed molecules bound their intended epitopes as predicted.... 
Last, the structures and orientations of these de novo antibodies when bound to the epitope of interest were validated using a structure prediction model called RoseTTAFold2 to filter for designs that bound the epitope as intended. Given the substantial medical impact of therapeutic antibodies, this technique has the potential to transform biologic drug discovery into drug creation.",
"date": "2025-12-05",
"last_updated": "2025-12-07"
},
{
"title": "Quantum-inspired computational wavefront shaping enables turbulence-resilient distributed aperture synthesis imaging",
"url": "https://www.science.org/doi/10.1126/sciadv.aea4152",
"snippet": "This synergy merges the aberration cancellation of quantum-inspired correlation with the scalability of computational optics. Crucially, by eliminating the need for physical SLMs or array detectors, QiCWS enables aberration correction using only a single-pixel detector. This capability permits operation under low photon flux and across a broad spectrum from deep ultraviolet to terahertz, where array sensors are prohibitively expensive or impractical (21). Together, these advantages position QiCWS as a solution for diverse wavefront shaping applications.... DOASI overcomes these limitations by deploying a phase-randomly modulated laser array to illuminate objects through turbulence, while single-pixel detection and computational correlation retrieve diffraction-limited images without physical phasing or array/scanning sensors. By transforming hardware bottlenecks into computational tasks, QiCWS enables turbulence-resilient standoff imaging at the theoretical resolution limit of the synthetic aperture.... Here, ⟨·⟩ denotes the average over all M samplings. I_r(ρ_r, m) represents the m-th {m = 1, 2, …, M} computed reference pattern, which is a fully virtual process. B(m) is the bucket signal captured during the m-th illumination. Unknown aberrations prevent accurate computational prediction of illumination patterns at the target plane, causing conventional CGI failures (25, 26). QiCWS resolves this challenge by introducing a virtual SLM in the reference arm to computationally correct signal-path aberrations.... DISCUSSION\nRedefining wavefront correction: From hardware to computation\nOur QiCWS decouples turbulence resilience from physical optics constraints. It achieves diffraction-limited imaging (Fig. 5E) through complex aberrations using only single-pixel detection and computational phase conjugation. This performance is comparable to high-order AO (24). 
Unlike conventional AO systems that rely on deformable mirrors or SLMs (45), QiCWS virtualizes aberration correction by exploiting the computational second-order coherence [g^(2)] of the classical correlated light.... This considerably decreases wavefront correction latency by eliminating the need for real-time physical modulators. It also reduces the sensing complexity, requiring no wavefront sensors or guide stars that are indispensable for regular AO (24) or wavefront shaping (47–49). Last, QiCWS eliminates the subwavelength stability required in interferometry (22, 23) via incoherent correlation synthesis. All these represent a paradigm shift from hardware-centric correction to algorithmic resilience, enabling high-resolution imaging in scenarios where deformable mirrors or SLMs are impractical (e.g., nanosatellite swarms or endoscopic probes).... Breaking interferometric stability barriers\nDistributed aperture synthesis traditionally demands subwavelength path-length stability (~λ/10) (22), creating technological barriers and restricting application scenarios (23). DOASI eliminates this constraint by leveraging virtual phase conjugation to computationally compensate path-length differences digitally. This relaxes the path-length stability requirement to the coherence length of a nanosecond laser pulse (see Methods for details), which is orders of magnitude longer than needed for Fizeau or Michelson interferometers (22, 23).... DOASI bypasses this constraint, enabling larger apertures to collect more light, which is crucial in long-range imaging. These advantages open avenues for long-baseline interferometry in unconstrained environments.\nConvergence with computational imaging... This approach provides a foundation for developing physics-driven neural networks to further enhance imaging performance (52). For synthetic apertures, DOASI achieves resolution scaling (r ∝ λ/D; Fig. 4) comparable to multi-telescope interferometers but with minimal hardware. 
QiCWS bridges optical physics and computational imaging, which holds promise for the field. By shifting the hardware burden to a computational process, QiCWS eliminates the need for hardware feedback. This shift overcomes the speed constraints imposed by physical SLMs, a critical advantage for real-time applications such as free-space communications or retinal tracking.... Summary\nMotivated by the fundamental limitations of physical wavefront modulators in dynamic environments, we demonstrate QiCWS, a paradigm-shifting approach. By leveraging quantum-inspired second-order coherence, QiCWS virtualizes aberration correction to replace deformable mirrors or SLMs with algorithmic optimization of correlation optics. This methodology exploits pseudothermal illumination and single-pixel detection to computationally impose virtual phase masks, enabling turbulence-resilient DOASI.... However, this issue is readily addressable given the gigahertz rate of modern single-pixel detectors. Once data acquisition is completed within the quasi-stationary period of the time-varying aberration, the offline optimization nevertheless preserves the ability to correct time-varying aberrations. In our experiment, the genetic algorithm (population size 128, ~500 generations) used to optimize the compensation phase takes about 10 min on a PC (3.69 GHz CPU, 128-GB RAM). Once the compensation phase is determined, the final image can be reconstructed within seconds (see the Supplementary Materials for details).",
"date": "2025-12-03",
"last_updated": "2025-12-05"
},
{
"title": "Ultrasoft hydrogel immune millirobot with multimodal locomotion",
"url": "https://www.science.org/doi/10.1126/sciadv.adw9133",
"snippet": "Abstract\nAdvancements in cellular immunotherapy demanded efficient immune cell delivery. To meet this need, we introduced hydrogel-based immune millirobots designed for high immune cell loading and precise tumor targeting. These ultrasoft robots, embedded with magnetic nanoparticles, exhibited adaptable locomotion: walking, rolling, climbing, and undulating, enabling navigation through complex biological environments and alignment with varied tumor morphologies.... They responded to magnetic fields and ionic or pH changes, facilitating propulsion, grasping, and localized delivery. In vitro, the millirobots eradicated three-dimensional tumor models in four days; in vivo, they notably reduced tumor growth in HepG2-luc tumor-bearing nude mice within 15 days. Bioluminescence imaging confirmed enhanced natural killer cell activity at tumor sites. The robots demonstrated excellent biocompatibility and biodegradability and caused no adverse effects postimplantation. This work showcased a responsive, soft robotic system with potential for advancing immune cell delivery and exploring tumor-immune dynamics in cancer therapy.... This modular configuration not only emulated the locomotion and feeding mechanisms of a starfish but also, crucially, separated the movement module from the cell-loading module, helping preserve the activity of the carried cells.\nThe structure design and magnetic actuation... 2E). The walking, undulating, and crawling speed reached 7.62, 1.73, and 7.11 mm/s, respectively (Fig. 2, F to H). The best working frequency for these four locomotion modes was 3 to 7 Hz. Through these demonstrations, we underscored the HIM's adaptability and precision in traversing complex paths with diverse shapes and orientations, marking an avenue in the field of biomimetic robotics.... The HIM showcased proficiency in navigating confined spaces, deftly sidling through a narrow 2-mm-width slit (Fig. 3E). 
Leveraging its hydrogel structure and integrated immune cells, reminiscent of starfish anatomy, the HIM adeptly adapted its locomotion strategy to traverse channels of varying widths. Further highlighting its adaptability, the HIM dynamically adjusted its form to pass through a narrowing tunnel with inner diameters ranging from 8 to 2 mm (Fig. 3F), showcasing its supersoftness and ability to respond to changing environmental conditions.... Last, the HIM successfully maneuvered within a 3-mm-diameter channel (Fig. 3G), demonstrating its adaptability to confined biological spaces that closely mimicked in vivo structures such as gastrointestinal (GI) lumen folds, intestinal crypts, and intertissue clefts in the peritoneal cavity. These features commonly ranged from submillimeter to several millimeters in scale, presenting topographical and spatial challenges for targeted delivery systems.... The ability of the HIM to traverse these constraints highlighted its biomimetic adaptability and suitability for in vivo navigation through complex, uneven microenvironments. Together with its soft, biocompatible design and immune cell–carrying capability, these results supported the HIM's potential as a robust and versatile robotic platform for biomedical applications, including immune cell–based therapies in anatomically constrained regions.... Time-lapse images captured from 0 to 25 s illustrate the trajectory of the HIM, marked by dashed circles. Despite the moderate imaging contrast caused by overlapping soft tissues, the HIM remained distinguishable to the naked eye throughout the experiment. These results confirmed the feasibility of controlled microrobot locomotion in vivo using clinical-grade imaging techniques. These observations not only highlighted the effective design and functionality of the HIM but also demonstrated its potential applications in targeted drug delivery and therapeutic interventions within the GI system. 
Through this study, we have demonstrated the feasibility of using the HIM in unstructured biological environments.... As quantitatively demonstrated in fig. S16, our HIM overcame the traditional engineering compromise between miniaturization and functional payload capacity that has constrained previous microrobotic systems (4, 22, 25, 42, 56, 57).... This unique integration of substantial payload capacity and advanced trafficability positions the HIM platform as a promising technology for precision immune modulation in anatomically challenging therapeutic sites, including mucosal surfaces and postresection cavities, where localized immune activation is most needed.... Its ability to navigate within GI bio-microenvironments, combined with its self-shaping capabilities in response to chemical and pH changes, made it a promising tool for investigating small-scale soft-bodied locomotion and enhancing the effectiveness of targeted therapies. With its multimodal locomotion capabilities, the HIM could navigate flexibly through the GI tract, facilitating early treatment of primary tumors.... Furthermore, the HIM's small size and hydrogel composition resulted in poor visibility under clinical imaging, highlighting the need for material optimization with contrast-enhancing agents and integration with real-time imaging systems such as DSA. To overcome these limitations, future work will focus on developing closed-loop control systems that actively compensate for physiological motions and enhance positional accuracy. Additionally, testing in more complex animal models will be essential to evaluate long-term stability, biodegradability, and anatomical adaptability, ultimately advancing the system toward reliable minimally invasive therapeutic delivery.",
"date": "2025-12-03",
"last_updated": "2025-12-07"
},
{
"title": "The levers of political persuasion with conversational artificial intelligence",
"url": "https://www.science.org/doi/10.1126/science.aea3884",
"snippet": "CONCLUSION\nOur findings suggest that the persuasive power of current and near-future AI is likely to stem less from model scale or personalization and more from post-training and prompting techniques that mobilize an LLM's ability to rapidly generate information during conversation. Further, we reveal a troubling trade-off: When AI systems are optimized for persuasion, they may increasingly deploy misleading or false information. This research provides an empirical foundation for policy-makers and technologists to anticipate and address the challenges of AI-driven persuasion, and it highlights the need for safeguards that balance AI's legitimate uses in political discourse with protections against manipulation and misinformation.... Abstract\nThere are widespread fears that conversational artificial intelligence (AI) could soon exert unprecedented influence over human beliefs. In this work, in three large-scale experiments (N = 76,977 participants), we deployed 19 large language models (LLMs), including some post-trained explicitly for persuasion, to evaluate their persuasiveness on 707 political issues.... We then checked the factual accuracy of 466,769 resulting LLM claims. We show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods, which boosted persuasiveness by as much as 51 and 27%, respectively, than from personalization or increasing model scale, which had smaller effects. We further show that these methods increased persuasion by exploiting LLMs' ability to rapidly access and strategically deploy information and that, notably, where they increased AI persuasiveness, they also systematically decreased factual accuracy.... Academics, policy-makers, and technologists fear that artificial intelligence (AI) may soon be capable of exerting substantial persuasive influence over people (1–13). 
Large language models (LLMs) can now engage in sophisticated interactive dialogue, enabling a powerful mode of human-to-human persuasion (14–16) to be deployed at unprecedented scale.... To do so, we examine three fundamental research questions (RQs) related to distinct risks. First, if the persuasiveness of conversational AI models increases at a rapid pace as models grow larger and more sophisticated, this could confer a substantial persuasive advantage to powerful actors who are best able to control or otherwise access the largest models, further concentrating their power.... The resulting dataset is, to our knowledge, the largest and most systematic investigation of AI persuasion to date, offering an unprecedented window into how and when conversational AI can influence human beliefs. Our findings thus provide a foundation for anticipating how persuasive capabilities could evolve as AI models continue to develop and proliferate and help identify which areas may deserve particular attention from researchers, policy-makers, and technologists concerned about its societal impact.... For example, GPT-4o (27 March 2025) is more persuasive (11.76 percentage points) than models thought to be considerably larger in scale, GPT-4.5 (10.51 percentage points, difference test P = 0.004) and Grok-3 (9.05... This is smaller than the difference in persuasiveness that we observed between two equal-scale deployments of GPT-4o in study 3 that otherwise varied only in their post-training: 4o (March 2025) versus 4o (August 2024) (+3.50 percentage points in a head-to-head difference test, P < 0.001; supplementary materials, section 2.3.2). Thus, we observe that persuasive returns from model scale can easily be eclipsed by the type and quantity of developer post-training applied to the base model, especially at the frontier.... 
Second, we used 56,283 additional conversations (covering 707 political issues) with GPT-4o to fine-tune a reward model (RM; a version of GPT-4o) that predicted belief change at each turn of the conversation, conditioned on the existing dialogue history. This allowed us to enhance persuasiveness by sampling a minimum of 12 possible AI responses at each dialogue turn, and choosing the response that the RM predicted would be most persuasive (materials and methods).... Finally, we also examine the effects of RM tuning on developer post-trained frontier models. (Many of these models are closed-source, rendering SFT infeasible.) Specifically, we compare base versus RM-tuned models for GPT-3.5, GPT-4o (August 2024), and GPT-4.5 in study 2 and for GPT-4o (August 2024 and March 2025), GPT-4.5, and Grok-3 in study 3.... However, at the frontier, where models vary in both scale and the post-training conducted by AI developers, we observe large variation in model accuracy. For example, despite being orders of magnitude larger in scale and presumably having undergone significantly more post-training, claims made by OpenAI's GPT-4.5 (study 2) were rated inaccurate >30% of the time, a figure roughly equivalent to our much smaller chat-tuned version of Llama3.1-8B.",
"date": "2025-12-04",
"last_updated": "2025-12-08"
}
],
"id": "51de457f-9a7e-4656-b838-fae4d020c7be"
}