Career Presentation Series

IIT Stuart School of Business, Downtown Campus

It’s no secret that advances in high tech are driving demand for certain business careers, whether in analytics, software development, or application development, and are reshaping verticals such as financial services, healthcare, retail, and logistics. Product management frameworks have become critical to organization design and the product launch process, pushing the product development envelope and catalyzing innovation. I want to thank Shahzad Hussain, Senior Associate Director at IIT Stuart School of Business, for extending an invitation to address these important business and technology trends shaping today’s and tomorrow’s advanced career opportunities.

“It was a privilege to speak and present to Stuart graduate students on subject matter that will make an impact on their business careers.” (A. Alvarado, IIT Alumnus)

According to Google’s support site for Duplex, Android phones running Android 5.0 Lollipop and above will have access to Google Duplex via the Google Assistant. For iOS devices, you need to have the Google Assistant app installed on your phone, but the process functions in much the same way. There’s no word on when international markets will begin receiving Duplex.

WHAT EXACTLY IS GOOGLE DUPLEX?

“A long-standing goal of human-computer interaction has been to enable people to have a natural conversation with computers, as they would with each other,” wrote Google Principal Engineer Yaniv Leviathan and Vice President of Engineering Yossi Matias, in a May 2018 blog post announcing the technology.

For years, businesses have been trying to create a way for people to have conversations with computers. Almost every time we call a business, we encounter an automated phone system. We have virtual assistants on our phones and virtual assistant-powered speakers in our homes. But although these computer systems can be helpful, they have their shortcomings.

In a blog post, Google notes that one of the biggest problems with these systems is that the user has to adjust to the system, instead of the system adjusting to the user. Think about all of the times you have to repeat yourself when you’re on the phone with an automated system, or all of the times that a virtual assistant hears something different than what you actually said.

Google Duplex helps with these problems by allowing the computer to have a natural conversation with a human. The A.I. system adjusts to the person, instead of the person adjusting to the system. Therefore, the person can speak normally, just as they would if they were speaking to another person. Google Duplex also makes it so the computer system sounds like a human. It uses a natural tone, as well as words and phrases like “um” and “uh,” just like a person would. During a conversation, the A.I. system can also handle interruptions and elaborate.

At the center of Google Duplex is a recurrent neural network built using TensorFlow Extended (TFX), Google’s machine learning platform. When the system makes a phone call, it is nearly indistinguishable from a live human being. In Google’s published demos, Duplex can be heard scheduling an appointment and holding a natural phone conversation.
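Google hasn’t published Duplex’s architecture in detail, but the core idea of a recurrent network, carrying a hidden state forward through a sequence so that earlier parts of a conversation shape how later parts are interpreted, can be sketched in a few lines. This is an illustrative vanilla RNN cell in NumPy, not Duplex’s actual model; all dimensions and weights here are arbitrary:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: the new hidden state mixes the
    current input with the previous hidden state, so earlier inputs
    influence how later ones are interpreted."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 8, 16, 5

W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for t in range(seq_len):            # feed a 5-step input sequence
    x_t = rng.normal(size=input_dim)
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

print(h.shape)  # the hidden state summarizes everything seen so far
```

A production system like Duplex stacks far larger networks on top of speech recognition and synthesis, but the sequential state-carrying loop is the same basic mechanism.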

WHAT CAN GOOGLE DUPLEX DO FOR YOU?

The main thing Google Duplex will be able to do for you is handle some of your busy work. It can make calls on your behalf, schedule appointments, or call to check the hours of operation at a business, for instance. Now, it can’t make that uncomfortable break-up call for you, but it can reserve you a table at a participating restaurant or call and make you an appointment at a hair salon. For instance, if you tell Google Assistant you want to go to a specific restaurant at 7 p.m. on Friday, the system will call and make a reservation for you and then notify you when it’s confirmed.

A FEW POTENTIAL APPLICATIONS FOR GOOGLE DUPLEX

Imagine calling the cable company and dealing with an automated system that sounds and operates exactly like a human — one that can actually help you. There would be no more annoying IVR systems that tell you to “press 1 for billing questions or press 2 for technical issues.” Imagine if the IRS had this A.I. technology. During tax season, you wouldn’t have to wait an hour on hold for a representative because you could ask the A.I. system your tax-related questions.

Professionals such as doctors and lawyers who regularly schedule clients could have the A.I. do that on their behalf. Small businesses can also benefit: according to Google’s blog, 60 percent of small businesses that rely on customer bookings don’t have an online booking system.

CONCERNS ABOUT GOOGLE DUPLEX

Many people have expressed concerns about Google Duplex. Aside from the fact that it’s a bit creepy, some people are worried about privacy and security. Is it secure to have a computer calling businesses and speaking to live people on your behalf? Is it secure for the person on the other end of the line? Others have concerns about the potential impacts on advertising, and some people even worry about how quickly the A.I. is evolving. Google Assistant just came out a couple of years ago, and now it already sounds like an actual human on the phone.

Google addressed a couple of these issues during our recent demo of the service. At the beginning of the call, Google Assistant identifies itself and notes that it is recording the call. That might make a restaurant owner hesitate to continue the conversation until the service becomes more widely used.

What is Google Duplex? The smartest chatbot ever, explained [Digital Trends]

Somewhere in a laboratory, something is squirming. It’s an eel robot, swimming silently through an enormous darkened saltwater tank; its rhythmic, ribbon-like motions echoing those of its natural counterpart. Or is it an octopus-inspired robot, looking like the kind of thing H.P. Lovecraft might have dreamed up had he been born 100 years later and become a roboticist instead of a horror writer?

Maybe it’s neither of those. Maybe it’s a soft rubbery sleeve, wrapped around a human heart, and giving it regular reassuring squeezes to allow it to continue beating in the face of heart failure. Or a hydrogel gripper which can reach out and grab a swimming fish, securing it without hurting it. Or. Or. Or.

The reality is that there is no shortage of examples from the burgeoning field of soft robotics. One of the most exciting, fast-developing fields in robotics research, these robots don’t resemble the hard, metallic machines that science fiction promised us. They’re made of rubbery materials. They’re also much cheaper to build, weigh considerably less, and offer far more flexibility and (perhaps counterintuitively, given their soft materials) durability. In the process, they’re changing what we think of as a robot, and proving themselves immensely useful.

THE ROBOT OF A THOUSAND USES

The current surge of soft robots into a world previously dominated by metal doesn’t take anything away from the more traditional robots being built by companies like Boston Dynamics. Advances in traditional hard robotics have, in the past few years alone, given us all manner of versatile machines capable of doing everything from fine assembly work on production lines to performing dances, parkour displays or even Olympics-worthy backflips. But these traditional robots aren’t good for everything.

They’re great at carrying out the same task many times in a row, which is exactly what’s required if, for example, they’re helping to put together iPhones on a conveyor belt in a Foxconn factory. But remove them from the structured domain that they’re used to working in and suddenly their astonishing precision can disappear in an instant.

This is problematic for all sorts of reasons, not least the fact that, increasingly, robots are going to be working alongside people and other living things. This could mean directly working with humans as colleagues. It could also mean an even closer level of interaction, such as the aforementioned robot intended to keep a person’s heart beating in the face of possible cardiovascular failure. It’s these scenarios which have led, in part, to the increasing popularity of soft robots.

“[One other] exciting application field for soft robotics is in the search and rescue sector,” Giada Gerboni, a postdoctoral fellow at Stanford University, who last year gave a TED Talk titled “The incredible potential of flexible, soft robots,” told Digital Trends. “[In the lab I’m working in], there is a very motivated team of colleagues that is exploring how a soft robot that grows from the tip — like a plant, or more specifically a vine — would be able to navigate archeological sites inaccessible by humans and full of very delicate artifacts, or to explore the delicate underground habitats of some endangered animal species.”

Gerboni’s own research has medical applications. She is presently working on flexible robots which could be used as surgical instruments to access hard-to-reach parts of the body, just as her colleagues’ soft robots could help access remote sites.

“My current work in Stanford uses a flexible needle that can reach completely different parts of the liver from a single insertion point and can burn a tumor — [destroying] tumor cells by heat — with its tip,” she continued. A steerable needle isn’t what we might think of as a robot, but it’s one of the possibilities opened up by soft robots.

GOODBYE METAL?

The latest development in soft robotics is particularly exciting. Recently, researchers figured out how to eliminate the last hard, metal components in soft robots. Where previous soft robots have still required components like metal valves, this latest soft robot can function using only rubber and air — with pressurized air replacing the need for electronic innards. In doing so, it integrates memory and decision-making directly into its soft materials, using a kind of digital logic-based soft computer.

“These states of high pressure and low pressure are analogous to a digital signal, with 1 equal to high pressure, and 0 equal to low pressure,” Philipp Rothemund, one of the researchers on the project, explained to us. “Typically, the control of soft robots is done with electronic controllers and using hard valves resulting in hard/soft hybrid robots. Our soft computer allows integration of complex control directly into the structure of a soft robot.”
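The researchers’ valve designs aren’t detailed here, but the high-pressure/low-pressure analogy maps directly onto Boolean logic. A toy state-level simulation (no fluid dynamics; the valve behaviors below are simplified stand-ins) shows how pressure-controlled valves could compose into logic gates:

```python
HIGH, LOW = 1, 0  # high pressure ~ logical 1, low pressure ~ logical 0

def not_gate(control):
    """A valve that vents its output line when the control line is
    pressurized: high control -> low output, and vice versa."""
    return LOW if control == HIGH else HIGH

def and_gate(a, b):
    """Two valves in series: the output holds pressure only if
    both input lines are pressurized."""
    return HIGH if (a == HIGH and b == HIGH) else LOW

def nand_gate(a, b):
    return not_gate(and_gate(a, b))

# Any Boolean function can be built from NAND alone, which is why a
# "soft computer" made of such valves can encode complex control logic.
truth_table = [(a, b, nand_gate(a, b)) for a in (0, 1) for b in (0, 1)]
print(truth_table)
```

The real device also exploits bistable pneumatic states for memory, which this sketch doesn’t attempt to model.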

All of this is still at a relatively early stage, but it demonstrates the fast-moving world of soft robotics: lagging a half century behind its harder counterpart, but hastily doing its best to catch up. It seems to be working, too. In addition to the applications already mentioned, soft robot grippers are now being used on assembly lines, due to their ability to interact with objects without the potential of damaging them.

As Giada Gerboni notes, however, it’s a mistake to view soft robotics as being in conflict with traditional hard robots.

“I would not say that soft robots are better, but it is only a class of robots — or a way to do robotics — that we cannot avoid considering anymore,” she said. “Soft robots have already demonstrated to have great potential in navigation tasks because they can articulate their body easily, and their navigation is not compromised by [abrupt] contacts with unknown objects. But when it comes to another very common task for robots like manipulation tasks, then having non-rigid parts is very bad because the robot can [exert very little force, meaning that it cannot lift many objects.] Also, given the high body flexibility, degrees of freedom, and limited sensor integration, they cannot be precisely controlled so they cannot perform a very precise manipulation task.”

BOTH HARD AND SOFT

Ultimately, she says, the best way to truly make robots that will be useful in our lives will be to combine both hard and soft: just as we see with the materials found in nature. “Just look at humans,” she said. “They are perfect robotic machines with soft components, but also stiff structures and standard joints. [They] can carry out tasks in different contexts, but can also be precise and exert forces.”

Some researchers are even busy exploring materials which allow soft robots to stiffen on demand, or else combine different rigidity materials for achieving both precision and dexterity. The term “soft robot” has barely been around for a decade — but it’s already carving out a niche for itself. (And finding its way into pop culture, too: check out Baymax, the lovable robot from Disney’s Big Hero 6.)

As we move toward a world in which robots are part of our everyday life, soft robots are going to take on more importance than ever. One thing’s for sure: here in 2019, the word “robot” is no longer synonymous with “metal machine.”

Goodbye metal! How soft robots are changing what we think of as a robot [Digital Trends]

South Australia has recently put the world’s biggest lithium battery into operation – but perhaps it should’ve waited. A local startup says it’s built the world’s first working thermal battery, a device with a lifetime of at least 20 years that can store six times more energy than lithium-ion batteries per volume, for 60-80 percent of the price.

Climate Change Technologies, also known as CCT Energy Storage, has launched its TED (Thermal Energy Device) with a set of remarkable claims. TED is a modular energy storage unit that accepts any kind of electricity – solar, wind, fossil fuel-generated or straight off the grid – and uses it to heat up and melt silicon in a heavily insulated chamber. Whenever that energy is required, it’s pulled out with a heat engine. A standard TED box holds 1.2 megawatt-hours of energy, with all input and output electronics on board, and fits easily into a 20-ft (6-m) container.

Here are some of CCT’s banner claims about the TED: For a given size volume, it can store more than 12 times more energy than a lead-acid battery, and several times more than lithium-ion solutions. Installations can scale from 5-kilowatt applications out to a virtually unlimited size. Hundreds of megawatts of instantly accessible, easily controllable power should be no problem – all you need to do is add more units, plug-and-play style. In the case of an outage, each TED device can remain active for about 48 hours.
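Taking CCT’s per-unit figure at face value, the plug-and-play scaling claim is simple arithmetic. A back-of-the-envelope sketch, treating installation size as energy capacity in megawatt-hours:

```python
import math

UNIT_CAPACITY_MWH = 1.2   # energy stored per TED box, per CCT

def units_needed(target_mwh):
    """How many 1.2 MWh boxes a given installation would take."""
    return math.ceil(target_mwh / UNIT_CAPACITY_MWH)

# e.g. a 100 MWh installation of the scale CCT says it is targeting
print(units_needed(100))   # 84 boxes
```

Each box fitting a standard 20-ft container is what makes this kind of linear scaling plausible on the ground.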

It can also charge and discharge at the same time, and there are only three moving parts per box, so maintenance is almost negligible. Where lithium-ion and other batteries degrade over time, perhaps dropping to 80 percent capacity in some 5,000 cycles or so, the TED system has shown no signs of degrading after 3,000 cycles of service on the test bench, and CCT’s CEO Serge Bondarenko tells us over the phone that the company expects its units to last at least 20 years.

“Molten silicon just doesn’t degrade like lithium does,” says Bondarenko. “That’s a chemical process, ours is simply phase-change with heat. In fact, it appears silicon even gets better at storing heat after each cycle. And if you do need to decommission a TED device, it’s 100 percent recyclable. It simply doesn’t create the environmental problems that lithium does.”
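The phase-change principle can be sanity-checked with textbook numbers. Silicon’s latent heat of fusion is roughly 1.79 MJ/kg (a standard reference value, not a CCT figure), so melting alone would store a unit’s 1.2 MWh in a couple of tonnes of silicon, ignoring sensible heat and heat-engine losses:

```python
LATENT_HEAT_SI = 1.79e6       # J/kg, approx. latent heat of fusion of silicon
UNIT_ENERGY_J = 1.2e6 * 3600  # 1.2 MWh expressed in joules

# Mass of silicon whose melting alone would hold one unit's capacity.
# Real devices also store sensible heat and lose energy in the heat
# engine, so this is an order-of-magnitude estimate, not a spec.
mass_kg = UNIT_ENERGY_J / LATENT_HEAT_SI
print(round(mass_kg))  # roughly 2,400 kg
```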

Importantly for any large scale usage, it’s cost competitive – Bondarenko projects it’ll cost some 60-80 percent of the price you’d pay for an equivalent lithium-ion solution like Tesla’s Powerpacks, while taking up a smaller footprint on the ground. TED can easily be adapted for earthquake-prone environments by installing it on a quake-proof platform, but in the event of a serious issue, Bondarenko tells us “we just turn it off, and it cools down until it’s ready to go again. It’s very safe.” Mind you, since the melting point of silicon is more than 1,400° C (2,550° F), it’s not something you’ll want dribbling out on the ground.

CCT has signed an initial deal to provide TED devices to Stillmark Telecommunications, as well as a reciprocal manufacturing agreement with MIBA group, which will have exclusive rights to manufacture and sell the technology through Denmark, Sweden and the Netherlands, with negotiations ongoing about adding other European countries to that list. Manufacture is set to begin this quarter, and Bondarenko says once the devices have been proven commercially, the company plans to ramp up rapidly and be ready to build 100-megawatt-plus installations within a couple of years.

Obviously, this seems like great news for the renewable energy sector. Wind, solar, tidal and other renewable energy technologies can be very effective at generating a lot of power, but only when it’s available rather than on demand. Grid-level energy storage solutions could store energy up during the solar peak of the midday heat, then return that power to the grid during peak load times in the evening when the sun’s not shining, making renewables a truly 24-hour energy source.

Could the system’s huge energy density also scale downward to power electric vehicles? “No,” says Bondarenko, “it’s too big. The container, the insulation, the heat engine, it needs to be a certain size to realize its density benefits. But we can certainly charge electric vehicles, and we have been in discussions with some manufacturers of large electric ferry boats that could charge the battery up at the dock and use it to power their ferries.”

If things pan out the way CCT believes they should, this cheap, high-density thermal battery, powered by abundant elements and totally recyclable, could be a key technology in helping move the world toward a clean energy future.

“World’s first working thermal battery” promises cheap, eco-friendly, grid-scalable energy storage [New Atlas]

Google Lens, first introduced at Google I/O 2017, is one of the most exciting Android features in years. Originally exclusive to Pixel smartphones, Google Lens is now baked into many Android handsets, and is available as an app in the Play Store.

Google Lens combines the power of A.I. with deep machine learning to provide users with information about many things they interact with in daily life. Instead of simply identifying what an object is, Google Lens can understand the context of the subject. So if you take a picture of a flower, Google Lens will not just identify the flower, but provide you with other helpful information, like where there are florists in your area. It also does useful things like scanning QR codes, copying written text, and even live translation of other languages.

HOW TO ACCESS GOOGLE LENS

While there are a few ways to access Google Lens, the easiest is simply long tapping on the home button to open Google Assistant. Once it’s open, tap the compass icon in the bottom-right to open your Explore menu. Then tap the Google Lens (camera-shaped) icon to the left of the microphone to open a viewfinder screen. Point the camera at the item you are interested in, and tap on it. If you don’t see the Google Lens icon, make sure you have the app downloaded from the Play Store.

Some phones allow you to access the feature directly from the camera app. For example, on the LG G8 ThinQ, just open the camera app and double tap A.I. Cam to quickly get to the Google Lens screen.

Once Google Lens identifies an item, you can continue to interact with Assistant to learn more. If you point it at a book, for example, you’ll be presented with options to read a New York Times review, purchase the book on the Google Play Store, or use one of the recommended subject bubbles that appear below the image.

If Google Lens accidentally focuses on the incorrect item, you can tap the back button and give it another try. If it’s too dark, you can tap the light icon in the top left to switch on your device’s flash. You can even use Google Lens on pictures you’ve already taken by tapping the gallery icon in the top right.

Google Lens isn’t perfect. The company admits the technology works best for identifying books, landmarks, movie posters, album art, and more. Still, we were impressed when it offered up reviews, social media accounts, and business information when we pointed it at the awning for a small store. Point it at a business card and it will let you save the person as a contact, and fill in all the details on the card for you.

While Google Lens is still in its infancy, it shows a lot of promise. Its deep learning capabilities mean we should only expect it to get better in the future. Google Lens is currently available on most Android smartphones that support the Google Assistant, and you can expect it to be incrementally upgraded with new features as Google adds to its suite.

What about supportive tech designed for everyday life, such as helping elderly people with age-related loss of lower-leg muscle strength to walk? In these scenarios, customers may well seek a lightweight, low-profile alternative, preferably one that can be worn under everyday clothing. That’s what mechanical engineering researchers from Vanderbilt University have created: a new ankle exoskeleton developed to help people walk without fatiguing, and, crucially, without restricting natural motion or drawing attention to itself.

“In this project, we created a spring-powered exosuit — an unmotorized, soft exoskeleton — that can reduce loading on a person’s calf muscles as they ambulate,” Professor Karl Zelik, who worked on the project, told Digital Trends. “The device uses a novel under-the-foot clutch mechanism that we invented, and an extension spring that acts in parallel with the user’s calf muscles. As a person walks, some of the force that typically goes through their muscles is redirected and goes through the assistive spring instead. This reduces the muscle force and effort needed to walk.”
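Zelik’s parallel-spring principle amounts to force sharing: whatever force the spring supplies, the calf muscle doesn’t have to. A minimal sketch with invented numbers (the stiffness, stretch, and force demand are illustrative, not the device’s actual parameters):

```python
def muscle_force(total_force, spring_k, spring_stretch):
    """Force left for the calf muscle once a parallel spring
    (F = k * x) takes its share of the total demand, in newtons."""
    spring_force = spring_k * spring_stretch
    return max(total_force - spring_force, 0.0)

# Illustrative: 1500 N demand at push-off, 20 kN/m spring stretched 3 cm
total = 1500.0
remaining = muscle_force(total, spring_k=20_000.0, spring_stretch=0.03)
print(remaining, f"{1 - remaining / total:.0%} of the load off the muscle")
```

The clutch mechanism matters because the spring must engage only during the right phase of the stride; when disengaged, the ankle moves freely.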

Zelik notes that the low-cost device (it can be produced for under $100) weighs only one pound, is quiet, and contains no motors, batteries, or other components that protrude from the body. To his knowledge, this is the first ankle exoskeleton that can be fully concealed under everyday clothing.

“We performed a series of characterization tests on the device itself to show it works, and then we tested it in human subject experiments in our instrumented motion analysis lab,” he continued. “In these laboratory experiments, we demonstrated that our prototype can reduce loading on the calf muscles, and can assist biological ankle function across a wide range of speeds.”

In addition to older people or those with disabilities, Zelik suggested that there are other potential audiences for the technology. These include runners or hikers, as well as those whose jobs involve large walking distances, such as postal workers or soldiers. The team is currently working toward possible commercialization of this and other types of “mechanized clothing.”

This sleek new exoskeleton makes walking easier, fits under your clothes [Digital Trends]

Folding smartphones may not be available yet — but folding cameras are. On Wednesday, March 13, Insta360 unveiled the EVO, a 180-degree 3D camera and 360 camera that allows users to switch between the two modes by folding the device in half. The company also launched the HoloFrame the same day, a frame that pops onto the front of a smartphone to allow glasses-free 3D content viewing.

With the Insta360 EVO fully unfolded, the camera records 5.7K 180-degree 3D video and 18-megapixel 180-degree 3D photos, with the two lenses placed side-by-side. But fold the camera in half so that the lenses are back-to-back, and you lose that 3D capability but gain a full 360-degree view. The same sensors and lenses capture both modes, so the resolution remains unchanged when switching to 360 degrees, though those pixels are spread across twice the space.
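The trade-off in that last sentence is easy to quantify: the same pixel count spread across twice the angular coverage means half the detail per degree. A rough horizontal-only sketch, assuming “5.7K” means 5,760 pixels across the full field of view:

```python
H_PIXELS = 5760  # assumed horizontal pixel count behind the "5.7K" label

def pixels_per_degree(h_pixels, field_of_view_deg):
    """Angular resolution: how many pixels cover each degree of view."""
    return h_pixels / field_of_view_deg

ppd_180 = pixels_per_degree(H_PIXELS, 180)  # unfolded, 180-degree 3D mode
ppd_360 = pixels_per_degree(H_PIXELS, 360)  # folded, full 360 mode
print(ppd_180, ppd_360)  # 32.0 vs 16.0 pixels per degree
```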

The Insta360 EVO also uses FlowState stabilization, a type of 6-axis gyroscopic stabilization. The camera is equipped with two f/2.2 lenses. Besides the usual formats like JPEG and MP4, the camera can also shoot DNG RAW files and LOG video.

The EVO builds in time-lapse video modes, as well as an HDR mode to capture high contrast scenes. For stills, the camera has both auto and manual modes.

With an app, the EVO can send JPEGs and MP4s wirelessly to a smart device. The app includes editing and reframing tools before publishing to platforms, including YouTube and Facebook.

The company says that footage from the camera is also compatible with any VR headset (with HTC Vive Focus support coming later this month). But for users who don’t want to mess with a headset, the company is also launching the Insta360 HoloFrame today. The frame sits over the front of a smartphone like a backward phone case. The case itself allows for 3D viewing without glasses, while the EVO app can even track eye movement for more realistic results.

The folding camera will be available beginning today at insta360.com, B&H Photo Video, and other retailers, listing for about $420. The HoloFrame also launches today for about $30, with sizes for the iPhone X, XS, XS Max and XR. The company is also planning to launch HoloFrames for the Samsung Galaxy S8, S8+, S9, S9+ and Note 8 smartphones.

Forget folding phones, the Insta360 EVO camera folds in half to shoot 360 video [Digital Trends]


Although joint replacement surgery typically goes off without a hitch, potentially dangerous infections can occur after the operation. Scientists from New Jersey’s Stevens Institute of Technology are addressing that problem with a new type of highly targeted bacteria-killing gel.

Developed by a team led by Prof. Matthew Libera, the gel consists largely of a negatively-charged polymer that gets loaded up with positively-charged antibiotics.

The idea is that surgeons would dip artificial joints into a bath of the gel for a few seconds, leaving a lattice-like array of tiny “microgel” flecks on the surface of those implants – each fleck is only about one one-hundredth the width of a human hair. The joints would then be quickly dunked in an antibiotic bath, allowing the microgels to absorb that medication.
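The “one one-hundredth the width of a human hair” comparison pins down the fleck scale, assuming a typical hair diameter of about 75 micrometers (a common ballpark figure, not a number from the study):

```python
HAIR_WIDTH_UM = 75.0  # assumed typical human hair diameter, micrometers

# "one one-hundredth the width of a human hair"
fleck_um = HAIR_WIDTH_UM / 100
print(fleck_um)  # 0.75 micrometers, sub-micron scale
```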

After the artificial joint had subsequently been implanted in the patient’s body, the microgels would hold onto their antibiotic payload until a bacterium approached. The electrical charge of that microbe would then cause the medication to be released, killing the bacterium. In in vitro tests, the gels proved to be quite robust, remaining stable and active for weeks.

Besides keeping bacterial biofilms from forming on implants, the technology would also allow relatively small doses of antibiotics to be applied and activated only where needed. By contrast, orally administered antibiotics affect the whole body – this means that higher doses of them are required, potentially leading to the development of antibiotic-resistant bacteria.

Additionally, the microgels react to the presence of both active and dormant bacteria. Some other antibiotic implant coatings are triggered by byproducts of the microbes’ metabolic activity, so they miss bacteria that have gone dormant.

The scientists are now working on gaining government approval for the technology, and on finding industry partners to develop it further. It’s possible that the gel could also be used on items such as artificial heart valves or sutures.

“It only takes one bacterium to cause an infection,” says Libera. “But if we can prevent infection until healing is complete, then the body can take over.”

Mighty microgels could make for safer implant surgery [New Atlas]

In Nordic countries, where cold winters can keep people indoors, buildings often feature what are known as “displacement ventilation” systems. A Spanish study now suggests that such technology may also help keep patients from acquiring infections while in hospitals.

Unlike conventional mixing ventilation systems, which dilute contaminants by stirring fresh air throughout the whole room, displacement ventilation systems introduce new air from vents located near the floor, closer to the patient. That fresh air then pushes the old air up to removal vents near the ceiling, with the warmth of that old air helping it to rise. Described as working like a piston, this system results in a quicker, more thorough turnover of air within a room.

In U Cordoba tests that incorporated life-size mannequins equipped with simulated human respiratory systems, it was found that use of displacement ventilation systems in hospital rooms could significantly reduce airborne pathogens.

Additionally, the technology may help hospitals save money on power bills. While mixed ventilation systems are required to be capable of refreshing the air in a room 12 times per hour in order to sufficiently remove pathogens, it is estimated that displacement systems would only need to do so nine times an hour for the same level of protection from infections.
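Those air-change figures translate directly into fan throughput, which is where the power savings come from. A quick sketch for a hypothetical room (the room volume is assumed for illustration):

```python
def required_airflow_m3h(room_volume_m3, air_changes_per_hour):
    """Volumetric flow a ventilation system must deliver to refresh
    a room's air the required number of times each hour."""
    return room_volume_m3 * air_changes_per_hour

ROOM = 50.0  # m^3, an assumed single-bed hospital room volume

mixed = required_airflow_m3h(ROOM, 12)        # conventional mixing system
displacement = required_airflow_m3h(ROOM, 9)  # displacement system
print(mixed, displacement, f"{1 - displacement / mixed:.0%} less air moved")
```

Moving a quarter less air every hour, around the clock, is a substantial cut in fan energy across an entire hospital.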

Nordic-style ventilation could reduce hospital-acquired infections [New Atlas]

Creative agency Virtue has partnered with Copenhagen Pride to develop the world’s first genderless voice, which aims to eradicate bias in technology.

Called Q, the gender-neutral voice provides an alternative to technology such as Siri, Alexa and Google Assistant that all use female voices for home and service-orientated roles.

“As voice-assisted platforms become more pervasive in our lives, technology companies are continuing to gender their voice tech to fit scenarios in which they believe consumers will feel more comfortable adopting and using it,” said Virtue.

“A male voice is used in more authoritative roles, such as banking and insurance apps, and a female in more service-orientated roles, such as Alexa and Siri,” explained the agency.

Five voices were recorded and modulated

To begin with, Virtue worked together with Anna Jørgensen, a linguist and researcher at the University of Copenhagen, to define what is meant by a gender-neutral voice.

Five voices that were not immediately associated with either male or female binaries were recorded. These were then modulated using a special voice software that uses Jørgensen’s research on gendered voices to “neutralise” the voices.

The modulated voices were tested in a Europe-wide survey with over 4,600 participants, where people were asked to rate each voice on a scale of one (male) to five (female).

After settling on a single voice, the design studio modulated the voice and tested it until it was perceived as gender-neutral by participants.
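The selection procedure described above amounts to scoring candidate voices on the 1-to-5 scale and keeping the one whose average sits closest to the midpoint. A sketch with mock survey data (the ratings below are invented for illustration):

```python
# Mock survey results: each voice's ratings on the 1 (male) to 5 (female) scale.
survey = {
    "voice_a": [1, 2, 2, 1, 2],
    "voice_b": [3, 3, 2, 4, 3],
    "voice_c": [5, 4, 4, 5, 4],
}

NEUTRAL = 3.0  # midpoint of the scale

def mean(xs):
    return sum(xs) / len(xs)

# Keep the voice whose average rating is nearest the gender-neutral midpoint.
most_neutral = min(survey, key=lambda v: abs(mean(survey[v]) - NEUTRAL))
print(most_neutral)  # voice_b
```

The real project then iterated, re-modulating the chosen voice and re-surveying until listeners consistently rated it near the midpoint.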

“Technology should be rooted in new cultural truths, rather than antiquated ones. Using data and insights from our global network, we identified a unique opportunity to progress a medium of technology becoming more pervasive in our everyday lives,” said Virtue’s Ryan Sherman and Emil Asmussen.

Gender-neutral voice could be used in public spaces

“Q represents not the voice of one, but the voice of many who are fighting for a future inclusive of everyone,” they continued.

Q was launched on 11 March 2019 at SXSW in Austin, Texas.

Although an AI framework has not yet been developed to use the genderless voice in practice, Virtue and Copenhagen Pride are working towards implementing it across the technological spectrum.

They hope to find a place for it “not only in voice-assisted products but also as a voice for metro stations, games, theatres and beyond”.

High-profile technology organisations are still displaying a gender bias against women. Earlier this year, CES organisers revoked an innovation award from Lora DiCarlo, a female-led sex toy startup, after deeming its hands-free robotic massager “immoral”.

Q is the world’s first gender-neutral voice technology [Dezeen]

That’s right, folks: Domino’s has just cooked up yet another way of getting pie, and it can all be done in just a few taps.

Keen to satisfy the desires of drivers who have a sudden hankering for an Ultimate Pepperoni or Cali Chicken Bacon Ranch, the pizza giant teamed up with vehicle software company Xevo Market on an app that will let you order pie through your car’s infotainment system.

Using the in-car touchscreen, the quickest way to get the job done is by selecting the Easy Order option — essentially your default pizza — or your most recent order if you fancy the same again. After that, you just have to decide whether you want it delivered to your home or whether you’d rather drop by Domino’s to collect it yourself. In the latter case, the app can direct you to the nearest Domino’s in case you’re in an unfamiliar area and can’t find it yourself.

Domino’s new ordering feature will be automatically loaded into millions of cars via the Xevo platform starting in late 2019.

“At Domino’s, we want pizza ordering to be simple and always within reach, no matter where a customer happens to be,” Chris Roeser, director of digital experience at Domino’s, said in a release.

Brian Woods, Xevo’s chief marketing officer, said his company was “excited” about the partnership with Domino’s, adding, “Xevo Market makes it possible for Domino’s to reach people directly in their cars, streamlining mobile ordering to help busy consumers save time.”

Domino’s aficionados will already have clocked that the new in-car ordering process is part of the pizza company’s AnyWare platform, which makes it super-simple to order pizza using a slew of devices and methods. It could be an emoji tweet or a text, or the press of Domino’s physical Easy Order button. You can ask Alexa, use your smartwatch, or tell your smart TV. Yes, Domino’s appears to be doing everything in its power to prevent you from getting through a whole day without ordering one of its cheese-topped greasy wheels.

Domino’s in-car infotainment app lets you order pizza on the drive home [Digital Trends]

In the developing world, about 10 percent of medications are actually inferior counterfeits of the real thing. Unfortunately, these same regions often lack the equipment needed for detecting those fakes. An inexpensive new system, however, could remedy that problem.

Developed by a team at the University of California-Riverside, the setup requires users to start by placing a liquid pharmaceutical sample on a glass slide, where the liquid is drawn into a series of parallel microfluidic channels. One end of that slide is then placed in liquid nitrogen, creating a temperature gradient along the length of the slide.

A simple USB camera shoots video of the procedure. It records the manner in which the sample freezes, separates into different components, or otherwise reacts to the temperature change over time (and based on distance from the nitrogen). Bitmap stills are then created from that video. Those stills are known as “chronoprints.”

Utilizing freely-available software created by the researchers, a user can then compare the chronoprints of a sample to those of the genuine medication that the sample is claimed to be. If it’s a counterfeit, it will have reacted differently to the change in temperature, so its chronoprints won’t match those of the real thing. The program will alert the user to that fact.
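The comparison step can be illustrated with a minimal sketch, under the assumption that each chronoprint is a small grayscale bitmap (here a nested list of 0-255 pixel values). The threshold value and the simple mean-absolute-difference metric are invented for illustration; the researchers’ actual software uses more sophisticated image-analysis algorithms.

```python
# Compare a sample's chronoprint against the genuine drug's reference image.
def mean_abs_diff(a, b):
    total = n = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            n += 1
    return total / n

def looks_counterfeit(sample, reference, threshold=20.0):
    # A counterfeit reacts differently to the temperature gradient,
    # so its chronoprint deviates strongly from the reference.
    return mean_abs_diff(sample, reference) > threshold

reference = [[10, 10], [200, 200]]
genuine   = [[12,  9], [198, 203]]   # small sensor noise only
fake      = [[90, 80], [120, 110]]   # very different freezing pattern

print(looks_counterfeit(genuine, reference))  # False
print(looks_counterfeit(fake, reference))     # True
```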

The technology could also be used on solid medications dissolved in water, or even on food. In fact, in lab tests, it was successfully used to differentiate between authentic and adulterated olive oil.

“By basically converting a chemical sample to an image, we can take advantage of all the different image analysis algorithms that computer scientists have developed,” says lead scientist Asst. Prof. William Grover. “And as those algorithms get better, our ability to chemically identify a sample should get better, too.”

Easily-obtained “chronoprints” could detect bogus medicine [New Atlas]

Engineers at the University of Washington have developed a robot that can feed people who struggle to feed themselves. Powered by an artificial intelligence algorithm, the system detects pieces of food on a plate, stabs them with a fork, and transports the morsels to a person’s mouth.

The project was first motivated by a trip that Siddhartha Srinivasa, UW engineer and lead researcher, took six years ago to a rehab institute, where a young girl asked him to develop a robot that would let her eat by herself. After meeting with caregivers and other people with mobility impairments, Srinivasa and his team recognized the broad need for an autonomous feeding system and set out to create one.

“The system is a robot arm … integrated with a wrist-mounted camera, a tactile sensor on the fingers, and a fork gripped by the two fingers,” Gilwoo Lee, a UW doctoral student who worked on the project, told Digital Trends.

When in use, the arm, which is mounted on a user’s wheelchair, prompts its user to select an item that she would like to eat. The system then performs some calculations, running data through a set of algorithms, to determine the food type and “skewering position,” or the angle at which the arm should stab the food. Through trials, Lee, Srinivasa, and their colleagues found that the act of eating often entails orienting various foods differently on a fork. For example, stabbing a strawberry near the tip and tilting it towards the person’s mouth helps them eat it more easily.

“Based on the food identity and the skewering position, the robot moves down to skewer an item, executing the most successful skewering strategy tailored for each item,” Lee explained. “Once the item is skewered, the arm moves around to deliver the food to the person sitting in the wheelchair. During this time, the camera keeps detecting the person’s face and delivers the food close to the mouth. The system then waits until the person has taken a bite or eats the whole food, and then repeats.”
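The loop Lee describes can be sketched as a simple sequence of steps. Everything concrete below — the food names, skewering positions, tilt angles, and the lookup-table approach — is invented for illustration; the UW system derives its strategy from trained perception models rather than a fixed table.

```python
# Hypothetical strategy table: how to skewer and present each food type.
SKEWER_STRATEGY = {
    "strawberry": {"position": "near_tip", "tilt_deg": 45},
    "carrot":     {"position": "center",   "tilt_deg": 0},
}

def feed_one_item(food):
    """Return the sequence of actions for one bite, mirroring the loop
    described in the article: identify, skewer, deliver, wait, repeat."""
    strategy = SKEWER_STRATEGY.get(food, {"position": "center", "tilt_deg": 0})
    return [
        f"identify {food}",
        f"skewer at {strategy['position']}",
        f"tilt fork {strategy['tilt_deg']} degrees",
        "track face and move to mouth",
        "wait for bite",
    ]

for step in feed_one_item("strawberry"):
    print(step)
```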

The researchers are testing their technology with caregivers and patients in assisted living facilities so they can tailor it more precisely to users’ needs.

A paper detailing part of the project was published recently in IEEE Robotics and Automation Letters.

A German research team has prototyped an extraordinary heating/cooling system that stresses and unloads nickel-titanium “muscle wires” to create heated and cooled air at twice the efficiency of a heat pump or three times the efficiency of an air conditioner. Crucially, the device also uses no refrigerant gases, meaning it’s a much more environmentally friendly way to heat or cool a space.

The device is based on a peculiar property of certain shape-memory metal alloys that spring back into shape after being deformed. In some cases – particularly with nickel-titanium, also known as nitinol – these metals absorb significant amounts of heat when they’re bent out of shape, and then release that heat when they’re allowed to revert to their normal shape. The difference between the loaded wire and the released wire can be as much as 20° C (36° F).

The cooling device is thus quite simple in concept. It uses a rotating cylinder covered in nitinol wire bundles. The wires are loaded up as they pass through one side, sucking heat out of the air and storing it up. Then as they rotate past the other side, they’re allowed to snap back into shape, dumping the heat on the second side. Air is blown through chambers on each side, giving you one feed of heated air and another feed of cool air.

The Saarland University team has been experimenting with the device to figure out the optimal convergence of wire loading, rotational speed, and how many wires should be in a bundle to create the biggest possible heat differential between the two sides from a given energy input.

And the results seem very exciting. The Saarland University team claims that “the heating or cooling power of the system is up to thirty times greater than the mechanical power required to load and unload the alloy wire bundles” depending on the type of alloy used. They say that this makes their new system more than twice as good as a conventional heat pump and three times as good as a conventional refrigerator.
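A back-of-envelope check puts those claims in familiar terms. The figure of merit here is the coefficient of performance (COP): thermal power moved divided by mechanical power spent. The 30:1 ratio comes straight from the team’s claim; the comparison figure for a domestic heat pump is a typical published range, not a number from the article.

```python
# Coefficient of performance: heat moved per unit of work put in.
def cop(thermal_power_w, mechanical_power_w):
    return thermal_power_w / mechanical_power_w

# Best case claimed for the nitinol system: 30 W of heating/cooling
# per 1 W of mechanical loading work.
print(cop(30.0, 1.0))  # 30.0

# A typical domestic heat pump manages a COP of roughly 3-5, so even a
# fraction of the claimed ratio would be competitive.
```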

“Our new technology is also environmentally friendly and does not harm the climate, as the heat transfer mechanism does not use liquids or vapors,” says Professor Stefan Seelecke, the university’s Chairman of Intelligent Metal Systems. “So the air in an air-conditioning system can be cooled directly without the need for an intermediate heat exchanger, and we don’t have to use leak-free, high-pressure piping.”

The idea of achieving an air cooling or heating effect simply by bending and unbending little pieces of metal sounds bizarre. What about metal fatigue? How long will those alloy wires last in these variable temperature environments before they get brittle and break off?

Well, that’s one area where nitinol is very different from other metals. Indeed, it’s most famous for its use in medical devices, particularly implantable ones such as stents, where its remarkable flexibility allows a stent to bend, crush, stretch and twist with the artery as the body moves.

Verdict Medical Devices summed up nitinol’s fatigue properties thus: “it is by far the most resistant metal known to high-amplitude strain-controlled fatigue environments. Several applications … employ nitinol solely because of its extraordinary fatigue resistance.”

The cooling device described above would appear to fit the description of a high-amplitude, strain-controlled application. But, of course, it’ll be up to the researchers, or a future commercial team, to demonstrate how long you can expect your nitinol reverse cycle air conditioner to last.

It certainly looks like an exciting piece of technology, with the capability to reduce energy use in heating/cooling as well as taking refrigerant gases out of the equation.

Refrigerants not required: Flexible metal cooling prototype demonstrates extreme efficiency [New Atlas]

 

In our increasingly noisy world, it can be hard to find some quiet time. Now, a team of mechanical engineers at Boston University has developed a new device that is specially designed to block up to 94 percent of incoming sound waves, while still letting air pass through.

Current technology can be pretty effective at blocking sound. The walls of concert halls or recording studios are stuffed with thick cladding and large cavities to muffle outside noise, and newer metasurfaces could work the same way in a fraction of the space. But either way, that’s not going to allow much airflow. So the Boston team set out to design an acoustic metamaterial that could effectively block sound without blocking the passage of air.

“Sound is made by very tiny disturbances in the air,” say the researchers. “So, our goal is to silence those tiny vibrations. If we want the inside of a structure to be open air, then we have to keep in mind that this will be the pathway through which sound travels.”

Their design is a 3D-printed, ring-shaped device that’s made to some very precise mathematical standards. The shape is specifically designed to interfere with incoming sound waves and send them bouncing back the way they came. Material in the outer ring coils around like a helix, reducing the sound that can pass through the open center.

To test the device, the researchers placed a prototype in the end of a PVC pipe, and hooked a speaker up to the other end. The speaker blasted a tone through the pipe, but from the outside it was inaudible to the human ear. When the metamaterial was removed, the tone was suddenly reverberating through the room. According to the researchers, the device was able to block 94 percent of the sound.

“The moment we first placed and removed the silencer … was literally night and day,” says Jacob Nikolajczyk, co-author of the study. “We had been seeing these sorts of results in our computer modeling for months – but it is one thing to see modeled sound pressure levels on a computer, and another to hear its impact yourself.”
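That 94 percent figure can be translated into the transmission-loss number acousticians usually quote. This is an illustrative conversion, not a figure from the paper: if only 6 percent of the acoustic power gets through, the loss in decibels follows directly from the log ratio.

```python
import math

# Transmission loss in dB for a given fraction of acoustic power transmitted.
def transmission_loss_db(fraction_transmitted):
    return -10 * math.log10(fraction_transmitted)

# Blocking 94% means transmitting 6% of the incident power.
print(round(transmission_loss_db(1 - 0.94), 1))  # 12.2
```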

Acoustic metamaterial silences sound without blocking airflow [New Atlas]

Wireless charging is increasingly common in phones and other devices, but it’s still held back by a very short range – usually the device needs to sit on top of a charging pad, which cancels out some of the benefits of going wireless. In a new step towards truly wireless charging, engineers have developed an ultra-thin device that captures Wi-Fi signals and converts them into electricity.

The new system is based on existing devices called rectifying antennas, or rectennas. These capture AC electromagnetic waves in the air – such as Wi-Fi signals – and convert them into DC electricity. But most of them are rigid and, being made with silicon or gallium arsenide, are best suited to powering small electronics. So the team on the new study set out to develop a new rectenna that’s flexible enough to be scaled up to much larger sizes.

For the new design, the team made the rectifier – the component that converts the current – out of molybdenum disulfide (MoS2). This semiconducting material measures just three atoms thick, making it extremely flexible while still holding its own in the efficiency department. The team says the MoS2 rectifier can capture and convert wireless signals at frequencies of up to 10 GHz with an efficiency of about 30 percent. That’s much higher than other flexible designs, and the researchers also say it’s faster.

That said, it doesn’t quite stack up against other rectifiers, which can reach efficiencies of up to 60 percent. It’s also generating a relatively small amount of electricity, producing about 40 microwatts from about 150 microwatts of Wi-Fi power. Although that isn’t much, it should be enough to power small wearable or medical electronic devices, removing the need for batteries.
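The reported power figures imply the efficiency directly, and it’s worth checking that the arithmetic lines up with the "about 30 percent" claim above:

```python
# Conversion efficiency from the reported numbers: ~40 microwatts of DC
# output from ~150 microwatts of incident Wi-Fi power.
def efficiency_pct(p_out_uw, p_in_uw):
    return 100 * p_out_uw / p_in_uw

print(round(efficiency_pct(40, 150), 1))  # 26.7
```

So the measured figures work out to just under 27 percent, consistent with the rounded claim.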

The team hopes that these disadvantages will be outweighed by the other benefits of the new design, including its flexibility and scalability.

“What if we could develop electronic systems that we wrap around a bridge or cover an entire highway, or the walls of our office and bring electronic intelligence to everything around us?” says Tomás Palacios, co-author of the study. “How do you provide energy for those electronics? We have come up with a new way to power the electronics systems of the future – by harvesting Wi-Fi energy in a way that’s easily integrated in large areas – to bring intelligence to every object around us.”

Next up, the team is planning on improving the efficiency and building more complex systems.

Two-dimensional antenna converts Wi-Fi signals into electricity [New Atlas]

How much more fun could drones be if you got fiddly hand controllers out of the way and flew them with your mind? That’s the question EEGSmart poses with its UDrone mini-quad, which responds to brainwaves and head movements instead of thumbsticks. It’s not perfect, but it does give a glimpse of a mind-controlled future.

The UDrone itself is fairly unremarkable; it’s a lightweight mini-quadcopter with 2-inch props, nice plastic bumpers to save it from damage when it bumps into a wall, and an 8-megapixel, 1080p-capable camera. You can fly it using your mobile phone, in which case it works like most similar small quads, but it also has some smarts under its belt, with face tracking, subject tracking, and gesture recognition.

It flies for six or seven minutes on a battery, which is about right for this size of thing. The camera isn’t anything to write home about, but it streams video back to your phone in real time as long as you’re within Wi-Fi range. So far, so ordinary.

In a second box, you get EEGSmart’s UMind Lite headset, and here’s where things get interesting. The headset has a number of sensors built in. There’s an EEG, or electroencephalography sensor, which measures electrical activity in the brain. There’s an EOG, or electro-oculography sensor that measures eye movements by monitoring the electrical potential between the front and back of the human eye. There’s an EMG, or electromyography sensor, that measures electrical activity in response to a nerve’s stimulation of muscles. It also has gyros and accelerometers, and patented gear built in to amplify signal and squash noise from the finicky brain and nerve sensors.

It charges via USB, like the drone itself, and sits over the ears across your forehead, just above your eyebrows. You pair it to your phone through the UDrone app, and then set the drone into “mind control mode” to activate it.

To launch the thing, you have to attain a state of Jedi-like focus. Which is fine, I’ve been doing my Sam Harris meditation tapes. You can watch your mental focus activity summarized into a number in the UDrone app. If your thoughts are a little skittery, you might find that number hovering around 15 or 20. When you zen out into a space of relaxed focus, it rockets upward. I’ve seen as high as 400 or so, which made me feel like Yoda.

To launch the drone, you pop it into mind control mode, then focus your way to 150 or more, and the drone lifts off to a chest-high hover. Focusing hard on the drone can convince it to rise, letting your mind wander makes it fall, in little stepped levels.
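The focus-to-altitude behaviour reads like a simple threshold controller, which can be sketched as below. The launch threshold of 150 comes from the text; the lower threshold, step size, and launch height are guesses, not EEGSmart’s actual values.

```python
# Illustrative model of the stepped, focus-driven altitude control.
LAUNCH_THRESHOLD = 150   # from the article
LOWER_THRESHOLD = 80     # assumed: below this, the drone drops a step
STEP_CM = 20             # assumed step size
LAUNCH_HEIGHT_CM = 100   # assumed "chest-high" hover

def next_altitude(current_cm, focus_score):
    """Return the new altitude after one control tick."""
    if current_cm == 0:  # on the ground: launch only on strong focus
        return LAUNCH_HEIGHT_CM if focus_score >= LAUNCH_THRESHOLD else 0
    if focus_score >= LAUNCH_THRESHOLD:
        return current_cm + STEP_CM          # focused: climb a step
    if focus_score < LOWER_THRESHOLD:
        return max(0, current_cm - STEP_CM)  # mind wandering: drop a step
    return current_cm                        # middle band: hold

print(next_altitude(0, 200))    # 100 -- launch to chest height
print(next_altitude(100, 200))  # 120 -- climb
print(next_altitude(100, 50))   # 80  -- descend
```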

To move it, you tilt your head. This feels extremely intuitive for right-to-left movements, and works really well; the drone tilts whichever way you tilt your head. EEGSmart has decided, however, to reverse things for forward and backward flight – probably thinking that you want to be looking up rather than down as your drone is flying forward. I thoroughly disagree with this assessment and would much rather the drone simply tilted whichever way I tilt my head.

Yaw control is done by turning your head sideways then back again, and this happens in 45-degree increments. You blink twice to start the camera timer, and clench your jaw when it’s time to land.

After burning through half a dozen battery charges, I’m definitely getting the hang of it. Altitude control is by far the hardest and least responsive control, since it’s difficult to know exactly when you’re building or shedding focus, but the drone does eventually do what you want. The head tilt control works great – it’d be even better if the forward/backward inputs weren’t reversed – and while the camera does fire off a lot of shots without me asking for it, I’ve done pretty well with the landing command.

It’s a pretty nifty feeling controlling a drone this way. It does suffer from being very digital – especially the stepped altitude changes and 45-degree turning increments, which are not a smooth way to fly. But it does give you a real sense of what hands-free flight could feel like, and as such we’d rate it a fun little toy to have around.

It’s quick enough to learn that you can pass it around for visitors to play with, and the prop guards do a good job stopping this thing from banging off the walls. You’ll want to fly it indoors, too, because wind does blow it around a bit, and that can be hard to correct for without the rock solid thumbstick controls you’d normally be using.

I do see a future in this kind of thing. I think UDrone should build some sort of training feature into the app, which lets you watch your control inputs in real time so you can make sure of exactly what signals you’re sending. That’d make the learning curve quicker without burning battery on the drone. It’s a pretty remarkable little gadget to play with, and I look forward to seeing where this kind of tech goes.

Traditionally, welding has been limited to materials that share similar properties, so it’s tough to make even aluminum and steel join forces. But now, scientists from Heriot-Watt University are claiming a breakthrough method that can weld together materials as different as glass and metal, thanks to ultrafast laser pulses.

Currently, metals can be welded to metals and glass to glass, but the two don’t mix well. They require different temperatures to melt and they expand differently in response to the heat. There are other manufacturing methods to get them to stick together, but they aren’t quite as neat.

“Being able to weld glass and metals together will be a huge step forward in manufacturing and design flexibility,” says Duncan Hand, director of the EPSRC Centre for Innovative Manufacturing in Laser-based Production Processes, which developed the new technique. “At the moment, equipment and products that involve glass and metal are often held together by adhesives, which are messy to apply and parts can gradually creep, or move. Outgassing is also an issue – organic chemicals from the adhesive can be gradually released and can lead to reduced product lifetime.”

The new technique works on optical materials such as quartz, borosilicate glass and sapphire, which can now be welded to metals like aluminum, stainless steel and titanium. The key to the process was an infrared laser that fires pulses on the scale of a few picoseconds – trillionths of a second.

“The parts to be welded are placed in close contact, and the laser is focused through the optical material to provide a very small and highly intense spot at the interface between the two materials – we achieved megawatt peak power over an area just a few microns across,” Hand explains. “This creates a microplasma, like a tiny ball of lightning, inside the material, surrounded by a highly-confined melt region. We tested the welds at -50° C to 90° C (-58° F to 194° F) and the welds remained intact, so we know they are robust enough to cope with extreme conditions.”

The team is now working with specialists to develop a prototype laser processing system, so the method can be commercialized for manufacturing.

Breakthrough process welds metal and glass together using ultrafast lasers [New Atlas]

While some businesses or research facilities have a constant need for a cleanroom, there are others that only require one once in a while – or perhaps they sometimes need to set one up in the field. It was with such scenarios in mind that the CAPE portable cleanroom was created.

Developed by Germany’s Fraunhofer Institute for Manufacturing Engineering and Automation, CAPE features a tubular CFRP (carbon fiber-reinforced polymer) frame that can be set up by hand using aluminum connectors. A “tent” consisting of a fabric roof, walls and floor is then hung within that frame – a 3 x 3 x 4-meter (9.8 x 9.8 x 13.1-ft) model can reportedly be set up in half an hour.

Once CAPE’s fabric door flap is sealed, and an included custom ventilation/filtration system is hooked up and activated, the environment inside the tent is said to meet ISO class 1 air-cleanliness standards. This means that within each cubic meter of air, there are no more than 10 particles that are over 0.1 micrometers in size.
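That "no more than 10 particles" figure follows from the classification formula in the ISO 14644-1 standard, which sets the maximum allowed concentration of particles of size D micrometres and larger in a class N cleanroom:

```python
# ISO 14644-1 class limit: max particles per cubic metre of size D (um)
# and larger in an ISO class N cleanroom.
def iso_particle_limit(n_class, particle_size_um):
    return 10 ** n_class * (0.1 / particle_size_um) ** 2.08

# ISO class 1, particles of 0.1 um and larger:
print(round(iso_particle_limit(1, 0.1)))  # 10
```

For comparison, a typical semiconductor fab cleanroom at ISO class 5 permits 100,000 particles of that size per cubic metre, which shows how stringent class 1 is.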

The fabric can be washed and sterilized between uses, plus it doesn’t off-gas any chemical emissions. Additionally, although it won’t let airborne particles pass through, it is air-permeable and electrostatically conductive, promoting even air flow between the tent’s interior and its surroundings. That said, by raising or lowering the air pressure within the tent, it’s possible to either keep the air inside from permeating out, or to keep outside air from flowing in.

CAPE is being made in a variety of sizes, ranging from a reach-in 30 x 30-cm model (11.8 x 11.8 inches) up to a biggie measuring 11 x 12 x 7 meters (36 x 39.4 x 23 ft). The system packs down into an aluminum case for storage and transport.

According to Fraunhofer, possible fields in which the product could be utilized include chip manufacturing, medical technology, the food industry, satellite assembly, and the automotive industry. It could even conceivably be used as a mobile operating room, set up at disaster sites.

The technology will be on display next month, at the Hannover Messe trade show.

CAPE provides an on-the-go clean space [New Atlas]

Companies around the world are quickly discovering that the cloud is allowing them to serve their customers better than ever before — and saving them big money in the process.

While there are countless mega-corporations leading the charge, smaller players are also finding their niche by “targeting enterprise companies seeking reliable places to host high-end software tools and startups looking for cheap, easy, and reliable solutions.”

Indeed, one of the most significant ways in which businesses can realize success with the cloud is through partnering with a cloud-based call center provider for support-related and other services. Here’s a look at how businesses can leverage the cloud to grow their operations and bolster sales.

Understanding Cloud-Based Call Centers

Cloud-based call centers employ agents who work remotely, serving your customers on an as-needed basis. Because these operations are flexible and scalable, your business can add or remove agents as call volume shifts.

Additionally, through this technology, you get first-class contact center operational reliability, while the power of the global cloud helps you keep up-to-date with the latest customer engagement technologies. Customer satisfaction is boosted. Plus, the costs tend to be much lower than setting up a call center on your own, and you have no learning curve to manage.

Maintaining Your Workplace Culture

Setting up a successful cloud contact center starts with picking the right provider. While you’ll need the expertise of experienced call center professionals, you also will need to rely on an outsourcing vendor that can incorporate members from your team.

By selecting employees from amongst your own ranks to maintain and grow your cloud contact center efforts, you can make sure your cloud contact center is a true representation of your company and its culture, instead of a generic service that doesn’t provide any additional value.

Picking a Cloud Call Center Provider

Technology counts, too. As you begin to look into cloud contact center providers, you’ll want to select an outsourcing vendor that will help you take your customer care to the next level. While you don’t have to worry about the hardware your cloud contact center provider will use — that’s on them — you should take a careful look at the software they will use to serve your customers.

Call Recording

The first technology to look for is a call recording system. Whether you plan to handle inbound calls, outbound calls, or a combination of the two, this feature allows you to listen to the calls agents are handling on your company’s behalf. Call recording is an integral part of quality assurance, and it should also play a role in coaching and training agents.

IVR Capabilities

Finally, consider whether your cloud contact center provider offers IVR capabilities. This technology collects information from callers and helps route them to the right department. The former is important because it gives you insights into your customer base and can help you generate a database of customer information you can access at any time. Meanwhile, the ability to route customers to the right department can help bolster your brand experience by reducing customer frustration and speeding up issue resolution.

Some IVR platforms can do even more, like being configured to measure the success of your advertising efforts, and they can save you money by providing a system for your customers to resolve specific issues without ever having to speak to a live agent. Some IVR systems can also offer context across communication channels, predict the intent of each call, and provide proactive outreach.
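At its core, the routing step an IVR performs is a mapping from caller input to a destination. The menu options and department names below are invented for illustration; real IVR platforms layer speech recognition, context, and intent prediction on top of this basic idea.

```python
# Toy IVR routing table: keypad input -> department.
MENU = {
    "1": "billing",
    "2": "technical_support",
    "3": "sales",
}

def route_call(caller_input, default="live_agent"):
    # Unrecognised input falls through to a live agent.
    return MENU.get(caller_input.strip(), default)

print(route_call("2"))  # technical_support
print(route_call("9"))  # live_agent
```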

Choosing the Right Provider for Your Business Needs

Adding a cloud contact center to your operations can help your business in a myriad of ways, as long as you choose the right provider. Ideally, you need a call center partner that will preserve your company’s culture while working to improve customer satisfaction. Selecting a cloud contact center provider that offers the right combination of technologies will help you reach your call center performance goals and learn more about your customers.

Nottingham University researchers have received £902,524 in funding from the Medical Research Council to develop a smart wound dressing that uses optical fibre sensors to assess whether affected tissue is healing well or is infected.

The dressing could have a significant impact on patient care and on healthcare costs for wound management, which stand at £4.5bn-£5.1bn a year, more than four per cent of the NHS budget.

Diabetic foot ulcers represent nearly £1bn of this cost and these wounds will be the initial focus of the project. Better wound monitoring has the potential to reduce the 7,000 lower limb amputations that affect people with diabetes in England each year.

The optical fibre sensors in the dressing remotely monitor multiple biomarkers associated with wound management such as temperature, humidity and pH, providing a more complete picture of the healing process.

“At present, regular wound redressing is the only way to visually assess healing rates, however this exposure can encourage infection, disrupt progress and creates a huge economic burden on NHS resources. Instead our technology will indicate the optimum time to change the dressing and send out an alert if intervention is required with infected or slow-healing wounds to improve patient care and cut the number of healthcare appointments needed,” said Professor Steve Morgan, director of the Centre for Healthcare Technologies and Royal Society Industry Fellow at the University.

Developed and validated by the Centre in laboratory tests, the proposed sensors will be fabricated in very thin (~100 µm diameter), lightweight, flexible, low-cost optical fibres. This versatile platform will then be incorporated into fabric that will look and feel the same as a conventional wound dressing.

The dressing will be connected to a standalone, reusable opto-electronic unit to constantly evaluate the wound’s status. The unit will transmit and receive light to and from the sensors; relaying information to the patient and clinicians. This will be achieved by wireless transfer linked to a mobile phone.
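The alerting logic such a unit might run can be sketched as a range check over the monitored biomarkers. To be clear, the ranges below are invented for illustration, not clinical values, and the project’s actual algorithms are not described in this level of detail.

```python
# Hypothetical "healthy" ranges for the biomarkers the dressing monitors.
HEALTHY_RANGES = {
    "temperature_c": (30.0, 35.0),
    "humidity_pct":  (40.0, 80.0),
    "ph":            (5.5, 7.5),
}

def check_wound(readings):
    """Return the list of biomarkers outside their expected range."""
    alerts = []
    for name, (low, high) in HEALTHY_RANGES.items():
        value = readings.get(name)
        if value is not None and not (low <= value <= high):
            alerts.append(name)
    return alerts

# An elevated pH here would trigger an alert to patient and clinicians.
print(check_wound({"temperature_c": 33.0, "humidity_pct": 60.0, "ph": 8.4}))
```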

Although the dressing will cost marginally more than the average dressing, the higher initial cost will be offset by fewer dressing changes or clinical visits and reduced healing time.  A 10 per cent reduction in costs associated with visits and appointments would provide £300m annual savings to the NHS alone.

The project team includes researchers from University Hospitals of Derby and Burton NHS Foundation Trust, Nottingham University Hospitals Trust, and industrial partners Footfalls and Heartbeats (UK).

Smart bandage embeds optical fibre sensors for improved wound care [Medgadget]

An innovative new temperature-sensitive fabric will adjust how much heat it holds based on the wearer’s body heat.

The fabric was created by a team from the University of Maryland using fibers of two different materials – one that repels water and one that absorbs it. The fibers were then coated in carbon nanotubes and woven to form the fabric. When a liquid, such as sweat, contacts the fibers, they clump together and leave gaps in the fabric. At the same time, the movement of the carbon nanotubes closer together changes their electromagnetic coupling so that the nanotubes absorb 35 percent more infrared radiation.

Temperature Sensitive Fabric Adjusts to Body Heat [IdeaConnection]