Nottingham University researchers have received £902,524 in funding from the Medical Research Council to develop a smart wound dressing that uses sensors to assess whether affected tissue is healing well or is infected.
The dressing could have a significant impact on patient care and on healthcare costs for wound management, which stand at £4.5-£5.1bn a year, more than four per cent of the NHS budget.
Diabetic foot ulcers represent nearly £1bn of this cost and these wounds will be the initial focus of the project. Better wound monitoring has the potential to reduce the 7,000 lower limb amputations that affect people with diabetes in England each year.
The optical fibre sensors in the dressing remotely monitor multiple biomarkers associated with wound management such as temperature, humidity and pH, providing a more complete picture of the healing process.
“At present, regular wound redressing is the only way to visually assess healing rates, however this exposure can encourage infection, disrupt progress and creates a huge economic burden on NHS resources. Instead our technology will indicate the optimum time to change the dressing and send out an alert if intervention is required with infected or slow-healing wounds to improve patient care and cut the number of healthcare appointments needed,” said Professor Steve Morgan, director of the Centre for Healthcare Technologies and Royal Society Industry Fellow at the University.
Developed and validated by the Centre in laboratory tests, the proposed sensors will be fabricated in very thin (~100 µm diameter), lightweight, flexible, low-cost optical fibres. This versatile platform will then be incorporated into fabric that will look and feel the same as a conventional wound dressing.
The dressing will be connected to a standalone, reusable opto-electronic unit that constantly evaluates the wound’s status. The unit will transmit and receive light to and from the sensors, relaying information to the patient and clinicians via a wireless link to a mobile phone.
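The monitoring logic described above (flag infection, or signal when a dressing change is due) can be sketched as a small decision function. The thresholds and category names below are purely illustrative assumptions; the project's actual clinical criteria have not been published.

```python
# Toy decision logic for a sensor-driven dressing alert.
# All thresholds are hypothetical, for illustration only.

def assess_wound(temp_c, humidity_pct, ph):
    """Classify a reading as 'ok', 'change_dressing', or 'alert_clinician'."""
    # Elevated temperature and an alkaline shift in pH are common
    # signs of infection (illustrative cutoffs only).
    if temp_c > 38.0 or ph > 8.0:
        return "alert_clinician"
    # A saturated dressing suggests it is time for a change.
    if humidity_pct > 90.0:
        return "change_dressing"
    return "ok"
```

For example, a reading of 36.5 °C, 60 % humidity and pH 7.4 would be classified as healthy, while 39 °C would trigger a clinician alert.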
Although the dressing will cost marginally more than the average dressing, the higher initial cost will be offset by fewer dressing changes or clinical visits and reduced healing time. A 10 per cent reduction in costs associated with visits and appointments would provide £300m annual savings to the NHS alone.
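A quick back-of-envelope check of the savings figure: for a 10 per cent reduction to yield £300m, visit- and appointment-related costs must be roughly £3bn a year. That £3bn figure is our inference from the article, not a stated number.

```python
# Back-of-envelope check of the £300m savings claim.
visit_costs = 3.0e9   # assumed annual visit/appointment costs, £ (inferred)
reduction = 0.10      # 10 per cent reduction
savings = visit_costs * reduction
print(f"£{savings / 1e6:.0f}m")  # prints "£300m"
```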
The project team includes researchers from University Hospitals of Derby and Burton NHS Foundation Trust, Nottingham University Hospitals Trust, and industrial partners Footfalls and Heartbeats (UK).
An innovative new temperature-sensitive fabric will adjust how much heat it retains based on the wearer’s body heat.
The fabric was created by a team from the University of Maryland using fibers of two different materials—one that repels water and one that absorbs it. The fibers were then coated in carbon nanotubes and woven to form the fabric. When a liquid, such as sweat, contacts the fibers, they clump together and leave gaps in the fabric. At the same time, the movement of the carbon nanotubes closer together changes the electromagnetic coupling so that the nanotubes absorb 35 percent more infrared radiation.
While the advent of 3D printers is commonly thought of as a revolution for manufacturing, it could have huge benefits for medicine as well. To help patch up large wounds that might normally require a skin graft, researchers at Wake Forest Institute for Regenerative Medicine (WFIRM) have developed a new bioprinter that can print dual layers of a patient’s own skin directly into a wound.
The idea of 3D printing skin has been in development for a few years. In 2014, a prototype machine was unveiled that could print large sheets of human skin that could then be cut to size and grafted onto a patient. The tech evolved over the years into more detailed machines and eventually a handheld device that works like a tape dispenser for skin.
The new machine looks like a cross between those last two. It’s much larger than the handheld device, but it’s still relatively portable in a hospital setting. The machine can be wheeled to a bedside, and a patient lies underneath the printer nozzle while it goes to work.
Like earlier devices, the new printer uses an “ink” made up of a patient’s own cells, to minimize the risk of rejection. First a small biopsy of healthy skin is taken, and from that two types of skin cells can be isolated: fibroblasts, the cells that help build the structure to heal wounds, and keratinocytes, which are the main cells found in the outermost layer of skin.
Larger amounts of these cells are grown from the biopsy sample, then mixed into a hydrogel to form the bioprinter ink. And here’s where it differs from previous bioprinters – rather than just applying the new skin over the injury, the new machine first uses a 3D laser scanner to build a picture of the topology of the wound. Using that image, the device then fills in the deepest parts with the fibroblasts, before layering keratinocytes over the top.
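The scan-then-fill strategy can be sketched as a layer-planning step: given a depth map of the wound from the 3D scan, deposit fibroblast ink from the deepest points upward, then cap every wound cell with keratinocytes. The grid representation, cell names, and layer heights below are our own illustration, not WFIRM's actual software.

```python
# Sketch of the scan-then-fill deposition plan.
# depth_map: 2D grid of wound depths in arbitrary layer units (0 = intact skin).

def plan_layers(depth_map):
    """Return per-cell deposit lists, filled deepest-first."""
    plan = {}
    max_depth = max(max(row) for row in depth_map)
    for level in range(max_depth, 0, -1):          # fill bottom-up
        for r, row in enumerate(depth_map):
            for c, d in enumerate(row):
                if d >= level:
                    plan.setdefault((r, c), []).append("fibroblast")
    for cell in plan:                              # outermost layer of skin
        plan[cell].append("keratinocyte")
    return plan

wound = [[0, 2, 1],
         [1, 3, 2]]
plan = plan_layers(wound)
```

A cell of depth 3 receives three fibroblast layers topped by keratinocytes, while intact skin (depth 0) receives nothing, mirroring the article's description of filling the deepest parts first.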
That technique mimics the natural structure of skin cells, allowing the injury to heal faster. The team demonstrated that it works using mouse models, observing that new skin began to form outward from the center of the wound. Notably, it only worked when the ink was made using the patient’s own cells – in other experiments the tissue was rejected by the body.
“If you deliver the patient’s own cells, they do actively contribute to wound healing by organizing up front to start the healing process much faster,” says James Yoo, co-author of the paper. “While there are other types of wound healing products available to treat wounds and help them close, those products don’t actually contribute directly to the creation of skin.”
The researchers say that the next steps involve conducting clinical trials in humans. Eventually, the new device could be put to work treating burn victims, patients with diabetic ulcers and other large wounds that have trouble healing on their own.
It’s not at all uncommon for autistic children to have phobias. Unfortunately, these fears can sometimes be so severe that the children have to completely avoid commonly experienced situations such as taking buses or going into shops. A relatively new immersive reality treatment, however, is showing great promise in addressing the problem.
Known as the Blue Room, the system was developed in a partnership between Britain’s Newcastle University and tech firm Third Eye Technologies. Although it’s been around in experimental form for the past few years, it has just become available to patients via the UK’s National Health Service.
Here’s how it works …
A child and a psychologist sit together in a small room, which has virtual reality animation projected onto all four of its walls. This immerses the child in a 360-degree interactive display without requiring them to wear VR goggles, which autistic children often don’t want to do.
Using an iPad to control the action, the psychologist guides the child through simulations of experiences that they find distressing, helping them to control their anxiety by doing things such as performing breathing exercises. The child’s parents watch on a closed-circuit video feed, so that they can see what coping strategies are used.
As the child becomes more comfortable with the situations, the complexity and noise levels of the simulations can be gradually increased, until they match those of the real world.
In a 2014 study of the technology, it was found that Blue Room treatment allowed eight out of nine children to overcome their fears, with some of them still unafraid of their given situation up to a year later. A significantly larger clinical study is now in progress.
“Situation-specific anxieties, fears and phobias can completely stop a child with autism taking part in normal family or school life and there are very few treatment options for them,” says Newcastle’s Dr. Jeremy Parr. “Currently the main treatment is cognitive behaviour therapy, but that often doesn’t work for a child with autism … People with autism can find imagining a scene difficult, so by providing it physically in front of the child’s eyes we can sit alongside them and help them learn how to manage their fears.”
Lightweight and shatterproof, polyethylene terephthalate (PET) plastic is recyclable, although most items made from it don’t get recycled. This is because reclaimed PET (rPET) just isn’t as good as the original material. A new “upcycling” process, however, is claimed to make it even better.
Developed by scientists at the US Department of Energy’s National Renewable Energy Laboratory (NREL), the technique involves first melting down discarded PET items such as bottles, then adding organic fibers obtained from plant waste. The ultimate end products are two types of fiber-reinforced rPET, which are said to be two to three times stronger and more durable than the original.
Additionally, it is estimated that the NREL technique will require 57 percent less energy than existing PET-reclamation processes, and that it will produce 40 percent fewer greenhouse gas emissions than standard petroleum-based composites manufacturing.
Although it’s currently not possible to recycle the new types of rPET, the researchers are looking into methods of doing so. Existing first-generation PET can only be recycled once or twice, and as mentioned earlier, standard recycling techniques result in a material that is inferior in quality to the original.
“Standard PET recycling today is essentially ‘downcycling,’” says Gregg Beckham, senior author of a paper on the study. “The process we came up with is a way to ‘upcycle’ PET into long-lifetime, high-value composite materials like those that would be used in car parts, wind turbine blades, surfboards, or snowboards.”
Read any extensive coverage of virtual reality and undoubtedly you’ll come across the word “immersion” at least once or twice. That’s a relatively vague term for the feeling of presence within the virtual world, but it’s as close as we’ll get to providing a singular point of reference for gauging the success of a VR developer in crafting a believable experience.
Most developers utilize similar techniques to achieve immersion. They work within the bounds of existing virtual reality headsets and craft small-scale, high-fidelity scenes, with binaural audio. If they do a good enough job, you may find yourself wandering into the uncanny valley. It’s rare to climb the rise on the other side — yet that’s exactly what Awake from virtual reality production house Start VR is attempting to do.
Its roughly 20-minute first episode is a lightly interactive cinematic VR experience that delivers a captivating story around the theme of lucid dreaming. But as intriguing as the narrative is, it’s the volumetric capture technique used to craft it that is Awake’s most striking element.
The end result is characters with incredibly realistic facial expressions, motions, and gestures. This is no ping-pong-ball motion capture. It’s something else entirely.
SO LONG PING PONG BALLS, HELLO HOLOGRAMS
“The technique that Awake was specifically written around is volumetric video,” explained creator and director Martin Taylor. “So it’s a new type of performance capture that takes the full actor’s performance including all of the wardrobe and the full expression and captures them as a filmic hologram.”
This is very different from the motion capture techniques used to craft most in-game characters for both virtual reality experiences and more traditional 2D games. While those use samples of movements to craft the 3D models that are skinned to bring virtual characters to life, volumetric capture records the entire person. Inside and out, so to speak.
“It’s using a special rig with 106 cameras with depth sensors all pointing towards a cylindrical volume in the center and up to two actors can stand in that volume and deliver their performances,” Taylor told Digital Trends. “All of these cameras and sensors capture that, and lots of processing and software stitches it together into a completely believable and solid digital copy of the performance.”
That gives the developers not only an accurate 3D representation of where those actors are and how they’re posed at any particular time, but a near photo-realistic video impression of them too.
“The raw version of the captures are a huge series of OBJ models which are all uniquely different from one another and then a video texture that’s wrapped over the top,” Taylor said.
Early pioneers in its usage, volumetric capture wasn’t something Start VR could perform alone. It needed Microsoft’s help.
Thanks to things like smartphones and automotive infotainment systems, both pedestrians and drivers are probably now less aware of one another than ever before. An experimental new crosswalk could help keep accidents from happening, however, through a variety of lights, electronic signs, and an app.
Designed by a team at the Korea Institute of Civil Engineering and Building Technology, the system starts by using a thermal imaging camera to detect pedestrians who are approaching the crosswalk. When someone is detected, the system responds by illuminating LED warning lights that are embedded in the asphalt on either side of the crosswalk. These lights are said to be visible from up to 50 meters away (164 ft), yet are not so bright that they will disrupt drivers’ vision.
Once a vehicle subsequently gets to within 30 m (98 ft) of the crosswalk, a blinking electronic sign illuminates to warn the driver of the pedestrian – just in case that driver missed the embedded LEDs.
Pedestrians, on the other hand, are warned of approaching vehicles in three ways. First, if an oncoming car travelling faster than 10 km/h (6 mph) is detected, a warning image is projected onto the ground in front of the pedestrian – this should catch the attention of people who are looking down at their phones, or the elderly, who are more likely to be looking at the ground as they walk.
Second, an audible alarm is sounded. Third, an app causes the pedestrian’s phone to vibrate and sound an alarm of its own.
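The warning cascade can be sketched as a single decision function. The 50 m visibility, 30 m sign trigger, and 10 km/h speed threshold come from the article; the function shape, signal names, and exact trigger conditions are our own illustration.

```python
# Sketch of the crosswalk's warning cascade (names illustrative).

def warnings(pedestrian_present, vehicle_distance_m, vehicle_speed_kmh):
    """Return the set of warnings the system would activate."""
    active = set()
    if pedestrian_present:
        active.add("embedded_leds")                      # visible up to 50 m
        if vehicle_distance_m is not None and vehicle_distance_m <= 30:
            active.add("driver_sign")                    # blinking roadside sign
        if vehicle_speed_kmh is not None and vehicle_speed_kmh > 10:
            # warnings aimed at the pedestrian
            active.update({"ground_projection", "audible_alarm", "phone_app"})
    return active
```

With no pedestrian present nothing fires; a pedestrian plus a fast, close vehicle activates every signal at once.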
In field tests that involved about 1,000 vehicles, 83.4 percent of drivers either stopped or reduced their speed in response to the system’s warnings. And on roads with a 50 km/h speed limit (31 mph), drivers approaching the crosswalk reduced their speed by almost 20 percent more than when approaching the same crosswalk without the added warning system.
“We expect outstanding results when the system is installed at crosswalks without traffic signals and crosswalks on rural highways, where the rate of pedestrian accidents is high,” says project leader Dr. Jong Hoon Kim. “We intend to continue to develop the system, so that drivers can be notified of upcoming crossings via their navigation apps, and also vehicles can automatically slow down when dangerous circumstances are detected.”
It is estimated that the system will cost about US$13,300 per crosswalk to install, and that its socioeconomic benefits should far outweigh that expenditure.
If there’s one thing that many North American cities need this winter, it’s help in snow removal. That’s likely why Colorado-based Left Hand Robotics is currently sold out of its SnowBot Pro, which autonomously brushes snow off of sidewalks while avoiding pedestrians.
Users start by walking the path that they wish the SnowBot to travel, carrying a Path Collection Tool with them as they do so. This records the path as a series of GPS waypoints, which are uploaded to the cloud-based Robot Operations Center (ROC). There, the GPS data is transformed into a “path program,” utilizing technology developed by Swift Navigation.
When it’s time for the SnowBot to take action, it accesses the ROC and downloads that program. An integrated inertial measurement unit (a combination accelerometer, gyroscope and magnetometer) also helps it to stay properly oriented. And should its two LiDAR sensors and six cameras detect any obstacles in its path, the bot will automatically stop and then notify the ROC, which will send instructions for getting around the obstacle.
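The record-then-replay workflow can be sketched as follows: waypoints logged on the walked path are replayed in order, and when an obstacle is reported the bot stops and asks the (here, simulated) ROC for a detour. All names and the detour mechanism are illustrative assumptions, not Left Hand Robotics' actual protocol.

```python
# Sketch of the SnowBot's waypoint replay with obstacle handling.

def follow_path(waypoints, obstacle_at, request_detour):
    """Drive waypoint to waypoint; pause and detour around an obstacle."""
    visited = []
    for wp in waypoints:
        if wp == obstacle_at:
            # stop, notify the ROC, and insert its detour before continuing
            visited.extend(request_detour(wp))
        else:
            visited.append(wp)
    return visited

path = [(0, 0), (0, 1), (0, 2)]          # GPS waypoints from the walked path
detour = lambda wp: [(1, 1)]             # pretend the ROC routes around (0, 1)
route = follow_path(path, (0, 1), detour)
```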
The actual snow removal is handled by a spinning front broom, which can be angled to either side. An optional rear attachment can also be used to spread salt or de-icer. Power is provided by a 37-hp Vanguard Model 61E gasoline engine – the SnowBot is definitely a burlier device than products such as the SuperDroid robotic snow plow, or the SnowBYTE.
Users are able to monitor its progress in real time, via the ROC web dashboard or mobile app. They can additionally view before and after photos of each snow-clearing job, which the bot automatically shoots and records.
The SnowBot Pro hit the market this winter (Northern Hemisphere) and as has already been mentioned, the company is now sold out for the season. Should you be interested in getting one for next winter, though, it’s priced at US$35,995.
Following an architecture competition, Turkey’s Melike Altinisik Architects (MAA) has been selected to create the new Robot Science Museum (RSM) in Seoul, South Korea. Interestingly, the firm says that the building will be part-constructed using 3D-printing machines, drones, and other robots.
The RSM was conceived by the Seoul Metropolitan Government to increase public interest in, and promote the knowledge of, robotics. It’s part of a larger development push in the area that will also host a Photographic Art Museum.
As well as robotics, the museum will include science exhibits with Artificial Intelligence, Virtual Reality and Augmented Reality systems, holographic tech, and training courses. The first “exhibit” though will be its own construction.
“The new Robot Science Museum which plays a catalytic role in advancing and promoting science, technology, and innovation throughout society is not only going to exhibit robots but actually from design, manufacturing to construction and services robots will be in charge,” explains MAA principal Melike Altınışık. “In other words, RSM will start its ‘first exhibition’ with ‘its own construction’ by robots on site.”
MAA told us that the spherical building’s curved metal facade will be molded, assembled, welded into place and then polished by machines. Additionally, 3D printers will be used to create concrete for the surrounding landscaping. Furthermore, MAA says that drones will be used for building inspections, security surveillance, mapping data, and controlling robotic construction vehicles.
The robotic aspect certainly sounds significant, though there will also likely be a lot of human involvement too. We’ll know more as the project progresses. The Robot Science Museum is due to be completed in 2022.
Robots are teaching themselves to handle the physical world.
For all the talk about machines taking jobs, industrial robots are still clumsy and inflexible. A robot can repeatedly pick up a component on an assembly line with amazing precision and without ever getting bored—but move the object half an inch, or replace it with something slightly different, and the machine will fumble ineptly or paw at thin air.
But while a robot can’t yet be programmed to figure out how to grasp any object just by looking at it, as people do, it can now learn to manipulate the object on its own through virtual trial and error.
One such project is Dactyl, a robot that taught itself to flip a toy building block in its fingers. Dactyl, which comes from the San Francisco nonprofit OpenAI, consists of an off-the-shelf robot hand surrounded by an array of lights and cameras. Using what’s known as reinforcement learning, neural-network software learns how to grasp and turn the block within a simulated environment before the hand tries it out for real. The software experiments, randomly at first, strengthening connections within the network over time as it gets closer to its goal.
It usually isn’t possible to transfer that type of virtual practice to the real world, because things like friction or the varied properties of different materials are so difficult to simulate. The OpenAI team got around this by adding randomness to the virtual training, giving the robot a proxy for the messiness of reality.
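The core idea of domain randomization can be shown with a toy: each simulated episode samples different "physics" (here, just a friction coefficient), so whatever the learner settles on must work across the whole range rather than at one exact simulator setting. The task, numbers, and search method below are ours, not OpenAI's.

```python
import random

# Toy domain randomization: learn how hard to push a block so it lands on a
# target, when friction varies randomly between simulated episodes.

random.seed(0)

def episode_reward(push, friction, target=1.0):
    """Reward peaks when the block slides exactly onto the target."""
    distance = push * (1.0 - friction)
    return -abs(distance - target)

def train(n_episodes=2000):
    candidates = [round(0.5 + 0.1 * i, 1) for i in range(30)]
    totals = {c: 0.0 for c in candidates}
    for _ in range(n_episodes):
        friction = random.uniform(0.2, 0.4)     # randomized every episode
        for c in candidates:
            totals[c] += episode_reward(c, friction)
    return max(totals, key=totals.get)          # best push across all frictions

best = train()
```

With friction averaging 0.3, the robust choice lands near a push of 1/0.7 ≈ 1.4: good on average over the randomized range, even though no single episode was simulated with the "true" real-world friction.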
New-wave nuclear power
Advanced fusion and fission reactors are edging closer to reality.
New nuclear designs that have gained momentum in the past year promise to make this power source safer and cheaper. Among them are generation IV fission reactors, an evolution of traditional designs; small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach. Developers of generation IV fission designs, such as Canada’s Terrestrial Energy and Washington-based TerraPower, have entered into R&D partnerships with utilities, aiming for grid supply (perhaps optimistically) by the 2020s.
Small modular reactors typically produce in the tens of megawatts of power (for comparison, a traditional nuclear reactor produces around 1,000 MW). Companies like Oregon’s NuScale say the miniaturized reactors can save money and reduce environmental and financial risks.
A simple blood test can predict if a pregnant woman is at risk of giving birth prematurely.
Our genetic material lives mostly inside our cells. But small amounts of “cell-free” DNA and RNA also float in our blood, often released by dying cells. In pregnant women, that cell-free material is an alphabet soup of nucleic acids from the fetus, the placenta, and the mother.
Stephen Quake, a bioengineer at Stanford, has found a way to use that to tackle one of medicine’s most intractable problems: the roughly one in 10 babies born prematurely.
Free-floating DNA and RNA can yield information that previously required invasive ways of grabbing cells, such as taking a biopsy of a tumor or puncturing a pregnant woman’s belly to perform an amniocentesis. What’s changed is that it’s now easier to detect and sequence the small amounts of cell-free genetic material in the blood. In the last few years researchers have begun developing blood tests for cancer (by spotting the telltale DNA from tumor cells) and for prenatal screening of conditions like Down syndrome.
The tests for these conditions rely on looking for genetic mutations in the DNA. RNA, on the other hand, is the molecule that regulates gene expression—how much of a protein is produced from a gene. By sequencing the free-floating RNA in the mother’s blood, Quake can spot fluctuations in the expression of seven genes that he singles out as associated with preterm birth. That lets him identify women likely to deliver too early. Once alerted, doctors can take measures to stave off an early birth and give the child a better chance of survival.
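The shape of such a test can be illustrated with a toy risk score: weight the expression levels of a gene panel and flag samples above a threshold. The gene names, weights, and threshold below are invented for illustration; Quake's actual seven-gene panel and model are not reproduced here.

```python
# Toy expression-based risk score (weights and threshold are hypothetical).

WEIGHTS = {"gene_a": 0.9, "gene_b": -0.4, "gene_c": 0.6}
THRESHOLD = 1.0

def preterm_risk(expression):
    """Flag a sample whose weighted expression score exceeds the threshold."""
    score = sum(WEIGHTS[g] * expression.get(g, 0.0) for g in WEIGHTS)
    return "elevated" if score > THRESHOLD else "typical"
```

A real model would fit its weights from cohort data (and validate on held-out pregnancies) rather than hand-pick them.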
Gut probe in a pill
A small, swallowable device captures detailed images of the gut without anesthesia, even in infants and children.
Environmental enteric dysfunction (EED) may be one of the costliest diseases you’ve never heard of. Marked by inflamed intestines that are leaky and absorb nutrients poorly, it’s widespread in poor countries and is one reason why many people there are malnourished, have developmental delays, and never reach a normal height. No one knows exactly what causes EED and how it could be prevented or treated.
Practical screening to detect it would help medical workers know when to intervene and how. Therapies are already available for infants, but diagnosing and studying illnesses in the guts of such young children often requires anesthetizing them and inserting a tube called an endoscope down the throat. It’s expensive, uncomfortable, and not practical in areas of the world where EED is prevalent.
So Guillermo Tearney, a pathologist and engineer at Massachusetts General Hospital (MGH) in Boston, is developing small devices that can be used to inspect the gut for signs of EED and even obtain tissue biopsies. Unlike endoscopes, they are simple to use at a primary care visit.
Tearney’s swallowable capsules contain miniature microscopes. They’re attached to a flexible string-like tether that provides power and light while sending images to a briefcase-like console with a monitor. This lets the health-care worker pause the capsule at points of interest and pull it out when finished, allowing it to be sterilized and reused. (Though it sounds gag-inducing, Tearney’s team has developed a technique that they say doesn’t cause discomfort.) It can also carry technologies that image the entire surface of the digestive tract at the resolution of a single cell or capture three-dimensional cross sections a couple of millimeters deep.
The technology has several applications; at MGH it’s being used to screen for Barrett’s esophagus, a precursor of esophageal cancer. For EED, Tearney’s team has developed an even smaller version for use in infants who can’t swallow a pill. It’s been tested on adolescents in Pakistan, where EED is prevalent, and infant testing is planned for 2019.
Custom cancer vaccines
The treatment incites the body’s natural defenses to destroy only cancer cells by identifying mutations unique to each tumor.
Scientists are on the cusp of commercializing the first personalized cancer vaccine. If it works as hoped, the vaccine, which triggers a person’s immune system to identify a tumor by its unique mutations, could effectively shut down many types of cancers.
By using the body’s natural defenses to selectively destroy only tumor cells, the vaccine, unlike conventional chemotherapies, limits damage to healthy cells. The attacking immune cells could also be vigilant in spotting any stray cancer cells after the initial treatment.
The possibility of such vaccines began to take shape in 2008, five years after the Human Genome Project was completed, when geneticists published the first sequence of a cancerous tumor cell.
Soon after, investigators began to compare the DNA of tumor cells with that of healthy cells—and other tumor cells. These studies confirmed that all cancer cells contain hundreds if not thousands of specific mutations, most of which are unique to each tumor.
A few years later, a German startup called BioNTech provided compelling evidence that a vaccine containing copies of these mutations could catalyze the body’s immune system to produce T cells primed to seek out, attack, and destroy all cancer cells harboring them.
In December 2017, BioNTech began a large test of the vaccine in cancer patients, in collaboration with the biotech giant Genentech. The ongoing trial is targeting at least 10 solid cancers and aims to enroll upwards of 560 patients at sites around the globe.
The cow-free burger
Both lab-grown and plant-based alternatives approximate the taste and nutritional value of real meat without the environmental devastation.
The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.
By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.
Depending on the animal, producing a pound of meat protein with Western industrialized methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.
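Applied to one pound of plant protein as the baseline, those quoted ranges give the implied footprint of one pound of meat protein:

```python
# Resource multipliers for meat vs. plant protein, per the figures above.
MULTIPLIERS = {"water": (4, 25), "land": (6, 17), "fossil_fuels": (6, 20)}

def meat_footprint(resource, plant_amount=1.0):
    """Implied (low, high) resource use for meat, given the plant baseline."""
    lo, hi = MULTIPLIERS[resource]
    return plant_amount * lo, plant_amount * hi
```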
The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.
Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste. Researchers at Maastricht University in the Netherlands, who are working to produce lab-grown meat at scale, believe they’ll have a lab-grown burger available by next year. One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.
Carbon dioxide catcher
Practical and affordable ways to capture carbon dioxide from the air can soak up excess greenhouse-gas emissions.
Even if we slow carbon dioxide emissions, the warming effect of the greenhouse gas can persist for thousands of years. To prevent a dangerous rise in temperatures, the UN’s climate panel now concludes, the world will need to remove as much as 1 trillion tons of carbon dioxide from the atmosphere this century.
In a surprise finding last summer, Harvard climate scientist David Keith calculated that machines could, in theory, pull this off for less than $100 a ton, through an approach known as direct air capture. That’s an order of magnitude cheaper than earlier estimates that led many scientists to dismiss the technology as far too expensive—though it will still take years for costs to fall to anywhere near that level.
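Putting the two figures together shows the scale of the bill: removing the UN panel's full target at Keith's theoretical floor price would cost on the order of $100 trillion over the century.

```python
# Worked arithmetic from the figures above.
tons_to_remove = 1e12    # up to 1 trillion tons of CO2 this century
cost_per_ton = 100       # Keith's theoretical floor, US$ per ton
total = tons_to_remove * cost_per_ton
print(f"${total / 1e12:.0f} trillion")  # prints "$100 trillion"
```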
But once you capture the carbon, you still need to figure out what to do with it.
Carbon Engineering, the Canadian startup Keith cofounded in 2009, plans to expand its pilot plant to ramp up production of its synthetic fuels, using the captured carbon dioxide as a key ingredient. (Bill Gates is an investor in Carbon Engineering.)
Zurich-based Climeworks’s direct air capture plant in Italy will produce methane from captured carbon dioxide and hydrogen, while a second plant in Switzerland will sell carbon dioxide to the soft-drinks industry. So will Global Thermostat of New York, which finished constructing its first commercial plant in Alabama last year.
An ECG on your wrist
Regulatory approval and technological advances are making it easier for people to continuously monitor their hearts with wearable devices.
Fitness trackers aren’t serious medical devices. An intense workout or loose band can mess with the sensors that read your pulse. But an electrocardiogram—the kind doctors use to diagnose abnormalities before they cause a stroke or heart attack—requires a visit to a clinic, and people often fail to take the test in time.
ECG-enabled smart watches, made possible by new regulations and innovations in hardware and software, offer the convenience of a wearable device with something closer to the precision of a medical one.
An Apple Watch–compatible band from Silicon Valley startup AliveCor that can detect atrial fibrillation, a frequent cause of blood clots and stroke, received clearance from the FDA in 2017. Last year, Apple released its own FDA-cleared ECG feature, embedded in the watch itself.
Sanitation without sewers
Energy-efficient toilets can operate without a sewer system and treat waste on the spot.
About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.
Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well.
In 2011 Bill Gates created what was essentially the X Prize in this area—the Reinvent the Toilet Challenge. Since the contest’s launch, several teams have put prototypes in the field. All process the waste locally, so there’s no need for large amounts of water to carry it to a distant treatment plant.
Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil.
Smooth-talking AI assistants
New techniques that capture semantic relationships between words are making machines better at understanding natural language.
We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their alleged smarts. They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations.
But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.
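The masked-word objective BERT trains on can be made concrete with a deliberately tiny stand-in model. The sketch below predicts a hidden word from its immediate neighbours by counting contexts in a toy corpus; real BERT replaces the counting with a deep transformer trained on millions of sentences, but the training signal is the same idea:

```python
from collections import Counter

# Toy corpus: which word tends to appear between a given
# (left-word, right-word) pair?
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat slept on the mat",
]

context_counts = {}
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        ctx = (words[i - 1], words[i + 1])
        context_counts.setdefault(ctx, Counter())[words[i]] += 1

def fill_mask(left, right):
    """Predict the most likely word between `left` and `right`."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

print(fill_mask("sat", "the"))  # "on"
print(fill_mask("on", "mat"))   # "the"
```

The leap BERT made was learning such predictions from raw, unlabeled text at scale, which is exactly what lets it fill multiple-choice gaps about as well as humans.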
These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.
Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.
In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.
As any scuba diver will know, communicating while underwater can be difficult. Although it’s possible to use hand signals, you still have to get other divers’ attention so that they see those signals in the first place. The Oceans S1 Supersonic dive computer is made to address that problem, using an ultrasonic comms system.
Made by Swedish startup Team Oceans, the wrist-worn S1 provides all the usual dive computer data – things like digital compass heading, elapsed time, current/maximum depth, water temperature, required surface interval before diving again, and so on. Information is displayed on a retina-class 2.2-inch color LED screen, and can be transferred via Bluetooth to an iOS/Android dive log app on the user’s smartphone once they’re out of the water.
Additionally, though, if the user wants to get the attention of one or more other S1-using divers, they just tap a button on the device. This sends an ultrasound signal through the water (radio waves don’t travel well underwater), which will be received by any paired S1s within a range of over 15 meters (49 ft).
Users of those devices will be alerted to the incoming “ping” via a haptic feedback system that buzzes the computer against their wrist. When they check the screen, a text message will tell them which diver sent it.
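An acoustic link like this has very little bandwidth, so each ping would need to be a compact, fixed-layout packet. The S1’s actual protocol is not public; the sketch below invents a plausible 10-byte layout (message type, sender name, depth, battery level) purely to show what packing and unpacking such a message looks like:

```python
import struct

# Hypothetical packet layout -- all field names and sizes invented:
# 1 byte message type, 6-byte ASCII sender name, depth in decimetres
# as an unsigned short, battery percentage as one byte.
PING_FORMAT = ">B6sHB"

def encode_ping(sender, depth_m, battery_pct):
    name = sender.encode("ascii")[:6].ljust(6, b"\x00")
    return struct.pack(PING_FORMAT, 0x01, name, int(depth_m * 10), battery_pct)

def decode_ping(payload):
    msg_type, name, depth_dm, battery = struct.unpack(PING_FORMAT, payload)
    return {
        "type": msg_type,
        "sender": name.rstrip(b"\x00").decode("ascii"),
        "depth_m": depth_dm / 10,
        "battery_pct": battery,
    }

pkt = encode_ping("Anna", 18.5, 72)
print(len(pkt))          # 10 bytes on the acoustic link
print(decode_ping(pkt))  # sender "Anna" at 18.5 m
```

Keeping the payload to a few bytes is what makes it practical to push through a slow, noisy ultrasonic channel and still arrive intact at every paired device in range.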
The Oceans S1 Supersonic weighs 95 grams (3.4 oz), works up to a maximum recommended depth of 50 m (164 ft), can store 500 dives or 80 hours’ worth of data, and should run for about 10 hours on one wireless charge of its battery.
It’s currently the subject of a Kickstarter campaign, where a pledge of SEK 2,999 (about US$324) will get you one – when and if it reaches production, that is. The planned retail price is €499 ($566). A package of two can be had for SEK 6,999 ($756) or €998 retail ($1,133).
As a side note, the existing Liquivision Lynx dive computer is already able to monitor the air supply of up to 10 divers, using tank-mounted ultrasound transmitters.
Every technology goes through a Development Phase before entering its Application Phase. The tech advancements below are all entering the Application Phase. They are ready to be used as innovation tools — and to be used to solve your company’s and your customers’ real-world challenges. Innovators in large multinational organizations, as well as small and mid-sized …
If the threat of AI and robots frightens you, then you’re probably leaving out some very important math. If you’ve been following the work of Boston Dynamics (currently owned by Softbank), you’ve probably seen some of their four-legged and wheeled robots which are able to navigate all sorts of obstacles and remain standing after …
In 1970, a scientist at IBM Research named Edgar F. Codd made a remarkable discovery that would truly change the world, though few realized it at the time (including IBM itself, which neglected to commercialize it). It was called the relational model for the database, and it would spawn an entire industry. Yet while today few have heard of …
It seems that the more technology progresses, the easier it becomes to produce convincing counterfeit goods. Scientists at the University of Copenhagen are fighting back, however, with product tags that they claim cannot be replicated – even by an item’s legitimate manufacturer.
Known as the physical unclonable functions (PUFs) system, the technology was developed by researchers Riikka Arppe-Tabbara, Mohammad Tabbara and Thomas Just Sørensen.
They created 9,720 of the tags by laser-printing QR codes onto regular paper, then spraying a translucent microparticle-containing ink over the top of each one. While the codes were still readable, the particles also showed up as patterns of tiny white spots on a black background (when magnified). Because of the random scattering inherent in the spraying method, those patterns were unique to each tag.
The scientists next photographed every one of the tags with a smartphone camera, in order to create a digital registry in which each microparticle pattern was linked to its corresponding tag.
When the tags were subsequently photographed by different smartphones, and the images of their particle patterns were compared to those in the registry, it was possible to tell which pattern belonged to which tag with an accuracy rate of 76 percent. In some cases another photo had to be taken with the “reading” phone because the original was out of focus or the tag was dirty, but in no cases did the system incorrectly match a pattern to a tag.
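The registry-matching step can be sketched in miniature. Assuming each tag’s microparticle pattern has already been reduced to a set of (x, y) spot centres (the researchers’ actual image-processing pipeline is not described here, and the tolerance and threshold values below are invented), identification is a nearest-pattern search with a reject threshold for suspected counterfeits:

```python
import math

def match_score(pattern_a, pattern_b, tol=2.0):
    """Fraction of spots in pattern_a with a counterpart in
    pattern_b within `tol` pixels."""
    hits = 0
    for (xa, ya) in pattern_a:
        if any(math.hypot(xa - xb, ya - yb) <= tol for (xb, yb) in pattern_b):
            hits += 1
    return hits / len(pattern_a)

def identify(query, registry, threshold=0.8):
    """Return the tag ID whose stored pattern best matches `query`,
    or None if nothing clears the threshold (possible counterfeit)."""
    best_id, best = None, 0.0
    for tag_id, stored in registry.items():
        score = match_score(query, stored)
        if score > best:
            best_id, best = tag_id, score
    return best_id if best >= threshold else None

registry = {
    "tag-001": [(10, 12), (40, 8), (25, 30)],
    "tag-002": [(5, 5), (18, 44), (33, 21)],
}
# A slightly noisy re-photograph of tag-001 (spots shifted by ~1 px)
query = [(11, 12), (39, 9), (25, 31)]
print(identify(query, registry))  # "tag-001"
```

The reject-rather-than-guess behaviour mirrors the reported results: a blurry photo may fail to match and require a retake, but a wrong match never clears the threshold.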
It is hoped that once the technology is refined, the tags could be manufactured utilizing commercial printing and coating processes. They could subsequently be affixed to items such as luxury goods or medications before shipping, then used by stores or customers to authenticate that those items are the genuine article.
Scientists at Hokkaido University have found a way to create materials that actually get stronger the more you use them. By mimicking the mechanism that allows living muscles to grow and strengthen after exercise, the team led by Jian Ping Gong developed a polymer that breaks down under mechanical stress, then regrows itself into a stronger configuration by feeding off a nutrient bath.
One of the drawbacks of non-living materials is that they have a very finite service life compared to living, organic materials, wearing out surprisingly quickly with use. Metals undergo fatigue, plastics crumble, ceramics crack, and textiles have a sadly short life compared to the skin they cover.
The reason for this is that living tissue can not only regrow itself, it can become stronger the more it’s used. That’s why a human heart can pump at a rate of about 72 beats per minute, 24 hours a day, 365 days a year, for over a century. It’s also why exercise can make skeletal muscles stronger. A workout in the gym that makes a human healthier would just be so much wear and tear to a machine.
Put simply, muscles become stronger because during exercise the fibers are broken down and then replaced by new, stronger fibers that feed off the amino acids supplied by the bloodstream to form proteins. In recent years, scientists and engineers have come up with new, self-healing materials that can repair themselves when damaged or worn out, but what about a material that, like muscle tissue, can become stronger with use?
To achieve this, the Hokkaido team used what are called double-network hydrogels. Like other hydrogels, these are polymers that are 85 percent water by weight, but in this case the material consists of both a rigid, brittle polymer and a soft, stretchable one. In this way, the finished product is both soft and tough.
However, the clever bit is that under laboratory conditions the hydrogel was immersed in a bath of monomers, which are the individual molecular links that make up a polymer. These serve the same function in the muscle-mimicking material as amino acids do in living tissue.
According to the team, when the hydrogel is stretched, some of the brittle polymer chains break, creating a chemical species called “mechanoradicals” at the end of the broken polymer chains. These are very reactive and quickly join up with the floating monomers to form a new, stronger polymer chain.
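The break-then-regrow cycle can be caricatured in a few lines of code. This is a deliberately minimal cartoon of the mechanoradical mechanism, not a chemistry simulation: the break probability and growth increment are invented numbers chosen only to show the qualitative effect of repeated "training" cycles:

```python
import random

def stretch_cycle(chains, monomer_bath, break_prob=0.3, growth=5, rng=random):
    """One stretch: some brittle chains snap, and their reactive
    broken ends (mechanoradicals) consume monomers from the bath,
    re-forming as longer, stronger chains."""
    new_chains = []
    bath = monomer_bath
    for length in chains:
        if rng.random() < break_prob and bath >= growth:
            new_chains.append(length + growth)  # broken, then regrown longer
            bath -= growth
        else:
            new_chains.append(length)           # survived this stretch intact
    return new_chains, bath

random.seed(0)
chains, bath = [10] * 20, 500
for _ in range(10):
    chains, bath = stretch_cycle(chains, bath)
print(f"mean chain length after training: {sum(chains) / len(chains):.1f}")
```

Each pass through the loop plays the role of one bout of "exercise": chains that break do not simply heal back to their old length, they come back longer, which is why repeated loading leaves the toy material stronger than it started.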
Under testing, the hydrogel acted much like muscles under strength training. It became 1.5 times stronger, 23 times stiffer, and increased in weight by 86 percent. It was even possible to control the properties of the material by using heat-sensitive monomers and applying high temperatures to make it more water resistant.
Gong says this approach could lead to materials suitable for a variety of applications, such as in flexible exosuits for patients with skeletal injuries that become stronger with use.
The research was published in Science and the video below discusses the new hydrogel material.
Few things will surprise and shock us as much as the ability of A.I. to alter reality–really! What separates the “real” from the “virtual” is obvious, right? Think again. The applications of AI are infinitely greater in number and in impact than anything we could possibly begin to imagine, and they are accelerating at a …
Still Trying To Demystify AI? Here’s A Simple Analogy That Even A 10-Year-Old Can Follow. The buzz about AI is everywhere, but most of us still think of AI as a black box. It isn’t. Look Ma…No Hands! In the first column of this two-part series I talked about how leading-edge AI, such as DeepMind’s …
German businessman Radoslav Albrecht has founded an online bank that will use Bitcoin to facilitate international money transfers. Albrecht says that Bitbond’s use of cryptocurrency will allow it to transfer money quicker and at a lower cost than standard banks.
“Traditional money transfers are relatively costly due to currency exchange fees, and can take up to a few days,” he told Reuters. “With Bitbond, payments work independently of where customers are. Via internet it is very, very quick and the fees are low.”
Clients who use Bitbond only hold onto the cryptocurrencies for a few minutes before they are exchanged for local currency. This ensures that clients won’t have to deal with the rather volatile cryptocurrency market.
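The two-hop conversion the article describes can be sketched numerically. All rates and the fee below are made-up example figures, not Bitbond’s actual pricing; the point is simply that the sender’s exposure to bitcoin’s price lasts only for the minutes between the two conversions:

```python
def crypto_transfer(amount_eur, eur_per_btc, local_per_btc, fee_rate=0.01):
    """Convert EUR to BTC, then immediately sell the BTC for the
    recipient's local currency. fee_rate is an assumed service fee."""
    btc = amount_eur / eur_per_btc        # hop 1: buy bitcoin
    btc_after_fee = btc * (1 - fee_rate)  # deduct the service fee in BTC
    return btc_after_fee * local_per_btc  # hop 2: sell for local currency

# Send EUR 1,000 at example rates of EUR 30,000 and PHP 1,800,000 per BTC
received = crypto_transfer(1_000, 30_000, 1_800_000)
print(f"Recipient gets about PHP {received:,.0f}")  # PHP 59,400
```

Because both conversions happen back to back, a swing in the bitcoin price over hours or days never touches the transfer, which is what lets the model sidestep the market’s volatility.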
Bitbond was founded in 2013 and has grown into an officially licensed bank with a large number of investors. Currently, Bitbond’s Berlin office employs 24 people from a dozen countries managing loans for about 100 clients. In total, the loans amount to about $1 million each month.
Albrecht says that the majority of his customers are small business owners or freelance workers who need a quick and affordable way to handle international loans. The loans are fairly small by the standards of most banks, and tend to be around $50,000 at the higher end of the scale.
While Bitcoin has been used as collateral for loans in the past, this is the first time that it has been used to facilitate international loans. It makes sense that Germany would be one of the first countries to attempt it, however. Adoption of Bitcoin has proceeded at a rapid rate within Germany. In terms of Bitcoin usage, it is second only to the United States.
While Bitcoin and other cryptocurrencies are just now beginning to hit the mainstream, they still pose many risks not found in traditional currencies. The largest is the simple fact that the various currencies tend to fluctuate sharply in value. This can make them risky as a long-term investment, but since Bitbond clients only hold the cryptocurrencies for a short amount of time, that risk is mitigated to some extent.
For more information about Bitcoin and the technology that makes it possible, check out our blockchain primer.
Approaches to dealing with communication impairments, whether they are caused by disability or otherwise, have long been outdated.
The limitations of using a flashcard-based approach often mean that some of the most fundamental aspects of improving communication can easily be missed. Whilst useful as an educational tool, the flashcard method is unable to offer as much of a sensation of reality as an interactive app.
For those with learning difficulties, conversation can be a tricky skill to practise, even through face-to-face interaction. However, it’s a challenge that can easily be tackled with the help of cutting edge AI technology.
“Conversation coach” app Speakprose, designed by AI engineers at Cognixion, aims to empower people who experience communication difficulties, improving their lives and engagement with the world.
By swapping a human conversation partner for a chatbot, Speakprose simulates interactions in the most realistic way possible, allowing its users to practise their conversation techniques as much as they like. The app – which is available to download for iOS via the Apple store – focuses not just on vocabulary, but also gestures and conversation structure.
Its easy-to-use interface is engaging for users, and can either talk on their behalf or help them to improve their own speech. The use of emojis allows users to create their own profiles and identify other conversation partners, whilst making the app a relatable and stimulating visual tool.
Thanks to the app’s rewards scheme, users are able to feel a sense of achievement as they make their way through the different stages. It also encourages personalised learning, making progress easier, faster and – most importantly – monitored. More challenging stages can be purchased by upgrading to the app’s ‘pro’ version.
When it comes to disability, dependence on others often has the potential to become overwhelming. With this in mind, Speakprose aims to help its users achieve simple goals, such as independence and the ability to express their true selves — often taken for granted, but an invaluable part of everyday life.
43 Percent of Boomer Tech Employees Are Worried About Losing Their Jobs. Here’s Why They Shouldn’t Be The workforce is changing in ways that are much more complex than the generational stereotypes we’ve bought into. There’s nothing new about the fear of diminishing employment options as we age. But what is new is the antidote. …