
Scientists Can Now 3D Print Tissues Directly Inside the Body—No Surgery Needed


A new bioprinter uses ultrasound to print tissues, biosensors, and medication depots deep in the body.

Our bodies are constantly breaking down, and over time their built-in repair mechanisms fail too. Knee cartilage grinds away. Hip joints no longer support weight. Treatments for breast cancer and other conditions can require surgically removing tissue. Because the body can’t regenerate these tissues, reconstruction with biomaterials is often the only option.

Traditionally, this kind of restoration involves one-size-fits-all breast implants or hip joints. More recently, 3D bioprinted tissues have begun to be tailored to patients. But these artificial tissues are printed outside the body, and they still require additional surgery to implant, raising the risk of scarring, inflammation, or infection and lengthening healing time.

This month, a team from the California Institute of Technology unveiled a system to 3D print tissues inside the body—no surgery needed. Dubbed deep tissue in vivo sound printing (DISP), the system uses an injectable bioink that’s liquid at body temperature but solidifies into structures when blasted with ultrasound. A monitoring molecule, also sensitive to ultrasound, tracks tissue printing in real time. Excess bioink is safely broken down by the body.

In tests, the team 3D printed tissues inside a rabbit’s stomach and a mouse’s bladder. They also added conductive nanoparticles to the ink to make soft biosensors, and printed depots of medication—anticancer or antibacterial drugs—that released their payloads when hit with ultrasound.

“This work has really expanded the scope of ultrasound-based printing and shown its translational capacity,” Yu Shrike Zhang at Harvard Medical School, who was not involved in the research, told IEEE Spectrum. “It’s quite exciting.”

From Light to Sound

Thanks to its versatility, 3D printing has captured the imagination of bioengineers. The technology can be used to make artificial biological tissues, organs, or medical devices.

Bioprinters usually deposit one layer at a time. Each layer is solidified using light, then the next layer is laid on top of it. This layer-by-layer process takes time. More recently, an upgraded method called volumetric printing solidifies 3D structures with a single blast of carefully tailored light. The approach is faster but also constrained by how deeply light can pierce tissues.

Infrared light, for example, can shape implants under a thin layer of skin and muscle, Xiao Kuang at the University of Wisconsin–Madison, who was not involved in the study, wrote in Science. But light dims and scatters the deeper it goes inside the body. This limits the “direct printing of implants beneath millimeter-thick tissues”—or barely below the skin.

Ultrasound, best known for its use in monitoring pregnancies, has an advantage here. It can reach deep into organs—nearly eight inches—without damaging them. Scientists have been exploring focused ultrasound, which aims sound waves of a specific frequency at a target tissue, as a way to monitor brain and muscle activity.

Ultrasound can also trigger chemical reactions. In 2023, Zhang and colleagues engineered a molecular concoction dubbed “sono-ink” that solidifies when blasted with a specific frequency of sound waves. The team 3D printed multiple shapes inside isolated pork belly, liver, and kidneys and patched tissue damage in a goat heart.

But the ink was sensitive to stress and other disruptions in the body, resulting in slower printing and poor resolution. Sound waves also generate heat, which hardened some of the sono-ink before it had formed the intended structures. Further, other molecules in the ink, along with local heat spikes across tissues, raised biocompatibility concerns.

“Ultrasound 3D printing inside a body is more challenging than it seems,” wrote Kuang.

A New Ink

The new system relies on upgraded sono-ink.

The ink has multiple components combined into a single concoction. First up are chains of molecules that normally float about freely but grasp each other when given a molecular cue. These are accompanied by fatty bubbles filled with binder molecules—the molecular cue—that release their payloads when exposed to ultrasound. A final encapsulated component includes multiple chemicals that scatter sound waves and light up when certain sound waves hit them. These help the team visualize the ink’s location and determine whether it’s formed the desired structure.

The new setup “prevented premature chemical reactions at body temperature and provided better control of the printing process,” wrote Kuang.

The ink is injected into the target site or shuttled there using catheters. To test the approach, the team 3D printed shapes—stars, pinwheels, teardrops, and lattices—in a variety of tissues, such as thick pork chops and chicken.

Compared to previous light-based methods, which only reached fatty tissues, the new technology penetrated muscle and activated the bioink to form shapes more precisely. The system prints at roughly 40 millimeters per second, the average speed of an inkjet printer.

A Sound Treatment

In mice with bladder cancer, the team 3D printed a kind of depot that slowly released anticancer drugs to stave off cancer growth. Today’s bladder cancer drugs are often flushed away within hours. A bioink patch would concentrate the drugs at the tumor for longer.

In another test, the researchers printed artificial tissue inches below the skin in rabbits’ leg muscles and bellies, showcasing the technology’s ability to restructure tissues in larger animals.

The bioink can also be customized. It can include carbon nanotubes, nanowires, and other biocompatible structures for bioengineering. In one test, the team loaded the ink with conductive nanoparticles and printed electronic sensors to measure the activity of living tissues.

“This functionality could be useful for tests that monitor physiological signals,” wrote Kuang, such as those that monitor heart and muscle health.

The ink is shelf-stable for at least 450 days and doesn’t seem to trigger immune responses. The body eliminates excess ink through normal metabolism, or it can be dissolved with a treatment typically used to counter heavy-metal poisoning.

There are still some kinks to iron out. Target tissues differ in depth, shape, and size, which could alter how the ultrasound bounces and cures the bioink. Printing on organs that move—our lungs, heart, and stomach—is even more complex.

In this respect, AI may help, wrote Kuang. Algorithms could decipher the links between sound waves, temperature, materials, and bodily interactions to better simulate and guide printing. An AI-based controller paired with real-time monitoring of the process could rapidly adjust to changes in the body’s state so the bioink solidifies as planned.

Although there’s a ways to go before clinical use, the team says their recent work shows the technology is versatile enough to be widely useful.

The post Scientists Can Now 3D Print Tissues Directly Inside the Body—No Surgery Needed appeared first on SingularityHub.




'Run! Get Out While You Can!' Scream Dying Ferns As Woman Brings In New Houseplant


DENVER, CO — Local woman Shaylyn Bridger brought home three new houseplants this week, prompting her half-dead ferns to scream at them to get out while they still could.


EPA head says agency will "fix" rules which created cars that automatically turn off every time you stop 🙌


PRAISE THE LORD.


Real‑world video demo: Using different AI models in GitHub Copilot


Claude 3.7 Sonnet, Gemini 2.5 Pro, GPT-4… developer choice is key to GitHub Copilot, and that’s especially true when it comes to picking your frontier model.

But with so many frontier generative AI models now available to use with GitHub Copilot (and more coming seemingly every day), how do you pick the right one for the job—especially with the growing capabilities of Copilot Chat, edit, ask, and agent modes?

In a recent video, I worked with GitHub’s Developer Advocate Kedasha Kerr (aka @ladykerr) to answer this exact question. Our goal? To build the same travel‑reservation app three different ways with Copilot ask, edit, and agent modes while swapping between Copilot’s growing roster of foundation models to compare each AI model in real-world development workflows. 

We set out to build a very simple travel‑reservation web app (think “browse hotel rooms, pick dates, book a room”). To keep the demo snappy, we chose a lightweight stack:

  • Backend: Flask REST API
  • Frontend: Vue.js, styled with Tailwind
  • Data: a local data.json file instead of a real database

That gave us just enough surface area to compare models while scaffolding the app, wiring up endpoints, and adding tests, docs, and security tweaks along the way.
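
To make the stack concrete, here’s a minimal sketch of what a Flask backend along those lines could look like. The endpoint paths and the shape of data.json are our own assumptions for illustration, not the exact code built in the video:

```python
# app.py -- minimal sketch of the demo's Flask backend (illustrative only).
import json
from flask import Flask, jsonify, request

app = Flask(__name__)

def load_rooms():
    # Stand-in for a real database: read the local JSON file on each request.
    with open("data.json") as f:
        return json.load(f)["rooms"]

@app.route("/api/rooms")
def list_rooms():
    # Browse available hotel rooms.
    return jsonify(load_rooms())

@app.route("/api/reservations", methods=["POST"])
def create_reservation():
    # Book a room: expects JSON like {"room_id": 1, "check_in": "...", "check_out": "..."}.
    booking = request.get_json()
    if not booking or "room_id" not in booking:
        return jsonify({"error": "room_id is required"}), 400
    # A real app would persist the booking; here we just echo it back confirmed.
    return jsonify({"status": "confirmed", **booking}), 201

if __name__ == "__main__":
    app.run(debug=True)
```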

Here are a few key takeaways from our video (which you should watch). 

But first, let’s talk about Copilot’s three modes

GitHub Copilot gives you three distinct “modes”: ask, edit, and agent. Ask is there to answer questions, edit is a precise code‑rewriting scalpel, and agent mode can drive an entire task from your prompt to the finished commit. Think of it this way: ask answers, edit assists, agent executes.

Ask mode
What it does (nuts & bolts): Analyzes the code you highlight (or the context of your open file) and returns explanations, examples, or quick fixes without touching your code. No diffs, and no saving. It’s just conversational answers.
Ideal moments to reach for it:
  • Debug a puzzling stack trace
  • Refresh your memory on a library or pattern
  • Grab a snippet or algorithm on the fly

Edit mode
What it does (nuts & bolts): You select one or more files, describe a change in a plain-language prompt, and Copilot applies inline edits across those files. But first, it shows you a diff, so you can approve every change.
Ideal moments to reach for it:
  • Add error handling or refactor repetitive code
  • Tight, multi‑file tweaks in a brown‑field codebase
  • Apply team style rules via custom instructions

Agent mode
What it does (nuts & bolts): Feed it a high‑level prompt and Copilot plans steps, runs terminal commands, edits multiple files, and keeps iterating autonomously while surfacing risky commands for review. Great for project‑wide, multi‑step work.
Ideal moments to reach for it:
  • Scaffold a new service or feature from a README
  • Large bug fixes that touch many modules
  • Automated cleanups (e.g., migrate to Tailwind everywhere)

Tip 1: No matter what model you use, context matters more than you think

The model you use is far from the only variable, and the context you offer your model of choice is often one of the most important elements. 

That means the way you shape your prompt—and the context you provide Copilot with your prompt and additional files—makes a big difference in output quality. By toggling between capabilities, such as Copilot agent or edit mode, and switching models mid-session, we explored how Copilot responds when fed just the right amount of detail—or when asked to think a few steps ahead.

Our demo underscores that different modes impact results, and thoughtful prompting can dramatically change a model’s behavior (especially in complex or ambiguous coding tasks). 

The takeaway: If you’re not shaping your prompts and context deliberately, you’re probably leaving performance on the table.

For a deeper dive into model choice, the guide “Which AI model should I use with GitHub Copilot?” offers a comprehensive breakdown.

Tip 2: Copilot agent mode is a powerful tool

Agent mode, which is still relatively new and evolving fast, allows Copilot to operate more autonomously by navigating files, making changes, and performing repository-wide tasks with minimal handholding.

This mode opens up new workflow possibilities (especially for repetitive or large-scale changes). But it also demands a different kind of trust and supervision. Seeing it in action helps demystify where it fits in your workflows.

Here are two ways we used agent mode in our demo: 

  • One‑click project scaffolding: Kedasha highlighted the project README and simply told Copilot “implement this.” Agent mode (running Gemini 2.5 Pro) created the entire Flask and Vue repository with directories, boilerplate code, unit tests, and even seeded data.
  • End‑to‑end technical docs: I started using agent mode with Claude 3.5 and prompted: “Make documentation for this app … include workflow diagrams in Mermaid.” Copilot generated a polished README, API reference, and two Mermaid sequence/flow diagrams, then opened a preview so I could render the charts before committing.

Tip 3: Use custom instructions to set your ground rules

Another insight from the session is just how much mileage you can get from customizing Copilot’s behavior with custom instructions. 

If you’re not familiar, custom instructions let you lay down the rules before Copilot suggests anything (like how APIs should be called, naming conventions, and style standards).

Kedasha in particular underscored how custom instructions can tailor tone, code style, and task focus to fit your workflow—or your team’s. 

One example? Using custom instructions to give every model the same ground rules, so swapping between models produced consistent, secure code without re‑explaining standards each time.

Whether you’re nudging Copilot to avoid over-explaining, stick to a certain stack, or adopt a consistent commenting voice, the customization options are more powerful than most people realize. If you haven’t personalized Copilot yet, try custom instructions (and check out our Docs on them to get started).
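
As a reference point, repository-wide custom instructions live in a Markdown file at .github/copilot-instructions.md. Here’s a minimal sketch; the specific rules below are illustrative assumptions for a Flask-and-Vue project like ours, not the actual instructions used in the video:

```markdown
<!-- .github/copilot-instructions.md (illustrative example, not from the video) -->
# Ground rules for Copilot in this repo

- Backend code is Python 3 with Flask; expose endpoints under /api/ and return JSON.
- Frontend code uses Vue single-file components styled with Tailwind utility classes.
- Validate every request body and return errors as JSON with appropriate HTTP status codes.
- Use snake_case in Python and camelCase in JavaScript; keep comments short and specific.
```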

Tip 4: Balancing speed and output quality

No matter what model you use, there are always tradeoffs between responsiveness, completeness, and confidence. A larger model may not provide quick suggestions when you’re working through an edit, for instance—but a smaller model may not offer the best refactoring suggestions, even if it’s faster in practice. 

TL;DR: It’s not about chasing the “best” model—it’s about knowing when to switch, and why. Your default model might work 80% of the time—but having others on deck lets you handle edge cases faster and better.

Take this with you

This video isn’t a scripted feature demo. It’s two devs using Copilot the way you would—navigating unknowns, poking at what’s possible, and figuring out how to get better results by working smarter instead of harder.

If you’ve been sticking with the defaults or haven’t explored multi-model workflows, this is your invitation to take things further.

👉 Watch the full video to see how we put Copilot to work—and got more out of every mode, prompt, and model. 

Now—what will you build? Try GitHub Copilot to get started (we have a free tier that’s pretty great, too). 


The post Real‑world video demo: Using different AI models in GitHub Copilot appeared first on The GitHub Blog.


Apple settles lawsuit over Siri possibly spying on your conversations and you might be able to cash in on it


If ever you've thought that your AI may have been listening in on your private conversations...


Pepsi was just knocked off the cola podium. A new soft drink has taken third place.


Well, it brings me no pleasure to bag on Pepsi but the people have spoken with their almighty dollars.
