Monday, May 14, 2018

You can see the cavitation bubbles form then collapse.

Originally shared by Colin Sullender

Cavitation in a glass bottle

It's one of the oldest party tricks: blowing the bottom off a glass bottle simply by hitting it on the top. When struck from the top, the bottle easily moves downwards while the liquid, because of its inertia, briefly stays put. This opens up a near-vacuum region at the bottom of the bottle (the bubbles). Because this region is now essentially empty, the liquid slams back into it with the full pressure of our atmosphere (101 kPa) behind it, a process called inertial cavitation. The shockwave from the collapsing cavities easily causes the glass to shatter.
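As a rough back-of-envelope check on why you need a slow-mo camera for this, the classic Rayleigh estimate for how long an empty cavity in water takes to collapse is t ≈ 0.915·R·√(ρ/Δp). The numbers below are my own illustrative assumptions (the ~1 cm cavity radius in particular is a guess, not a measurement from the video):

```python
import math

# Rayleigh collapse-time estimate for an empty cavity in water:
#   t ~= 0.915 * R * sqrt(rho / delta_p)
# All values here are assumptions for illustration, not measurements.
rho = 1000.0      # density of water, kg/m^3
delta_p = 101e3   # atmospheric pressure driving the collapse, Pa
R = 0.01          # assumed cavity radius, m (~1 cm)

t_collapse = 0.915 * R * math.sqrt(rho / delta_p)
print(f"Estimated collapse time: {t_collapse * 1e3:.2f} ms")  # ~0.91 ms
```

Sub-millisecond collapse is far too fast for the eye, which is exactly the regime where a slow-motion camera earns its keep.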

Source: https://youtu.be/lj3x2U4CaEs (The Slow Mo Guys)

#ScienceGIF #Science #GIF #Cavitation #Pressure #Slow #Mo #Motion #SlowMo #SlowMoGuys #Vacuum #Inertia #Inertial #Shockwave #Shatter #Glass #Bottle

Thursday, May 10, 2018

Still thinking through this, myself. This seems a useful place to start.

Originally shared by Allen “Prisoner” Firstenberg

What can (and should) we du-plex?

A lot of people have been talking about Duplex, which Google discussed at I/O yesterday. Duplex is a conversational technology that will let Google make phone calls on your behalf to do things like scheduling appointments, finding out if a store is open on a holiday, or reserving a table. The phone call is driven by an AI system that comes across as very human, both in how it sounds and in the mannerisms it exhibits, and it can interact with the person on the other end of the phone in a very humanly conversational manner.

This human-like behavior has, quite justifiably, freaked people out.

Jake Weisz was the first person who called me on it, in a comment on an earlier post of mine. And he made some very valid points, particularly that it was demonstrated just after Google demonstrated making very realistic digital models of a specific human's voice. Lauren Weinstein later made similar arguments, drawing upon a sci-fi movie to illustrate some of the concerns. It isn't too difficult to create scenarios where this technology can be used in unethical and manipulative ways. Jake flatly suggested that the technology should be verboten. Lauren suggested that it only be allowed after a clear warning that the user is talking to a machine.

I understand the concern. I can conjure some pretty distressing scenarios too. But I'm concerned that taking a heavy-handed approach will significantly reduce the usefulness to the people who can most benefit from it.

What do I mean by usefulness? Let's put it this way. Traditionally, when companies want to exchange information online, both sides have to define a data format to do the exchange. Many times, this is a rigid format, but it needs to change as the world evolves. If you're a small company or a home business, this can be a serious expense. Even if you're a big company, it can be an issue to maintain all that data.

With this technology, Google has said we can use a "data format" that has been around for a very long time and can be updated quickly: the human on the other end of the phone. So instead of creating a new data format and API that it expects everyone to fall in line and follow... Google is using our data format and API. Cost when the data changes? Minimal. Cost to implement? Most businesses are implementing it already.
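To make the contrast concrete, here's a minimal sketch. Everything in it is hypothetical: the field names, the schema, and the spoken sentence are mine, not any real Google or partner API:

```python
import json

# A hypothetical rigid machine-to-machine booking format. Both sides must
# implement it, validate it, and coordinate every time it changes:
booking_request = {
    "schema_version": "2.1",  # bump this, and every partner must update code
    "service": "haircut",
    "requested_time": "2018-05-15T16:00:00-07:00",
    "customer": {"name": "Pat", "phone": "+15555550123"},
}
print(json.dumps(booking_request, indent=2))

# The "data format" Duplex talks to instead needs no rollout at all,
# because the parser is the human answering the phone:
spoken_request = "Hi! I'd like to book a haircut for Pat tomorrow around 4pm."
print(spoken_request)
```

The business on the receiving end implements the second format simply by answering its phone, which is the whole point.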

Google is meeting us on our terms. Think about that.

But what about abuse of the technology? By Google or Cambridge Analytica Next Generation?

A really good point.

Lauren suggests that a clear statement up front that this is an advanced robocall is the way to go. And while he is probably right that, if companies don't do this voluntarily, legislation to force them to do it (or to ban it) will be right around the corner, I think this can cause a chilling effect. People will be afraid to talk to a machine, or may try to manipulate the machine in ways they wouldn't if they thought it was a human.

This is a very human reaction: "out of sight, out of mind." It was a situation we faced with Glass. I was wearing it last night when someone commented to me that people where he was from reacted very badly to the camera on Glass. Where was he from? London. Where there are more CCTV cameras per square kilometer than almost anywhere else (China perhaps excepted). But people don't see those cameras, so they don't fear them. If they do see a camera, they're suddenly afraid of it.

But we still need a way for this not to be abused.

I think there are some things that can be reasonably done to address concerns (a rough sketch of all three, in code, follows the list):

(1) Require (voluntarily or otherwise) that the Agent clearly identify, up front, exactly who it is acting on behalf of. This is a normal action for human agents ("I'm calling on behalf of...").

(2) That the person have done something specific to trigger the Agent to make this specific call on their behalf, and that the Agent has acknowledged to them that it is making a phone call.

(3) That the Agent report to them, in some way, when the call has been completed. (I considered requiring the recording to be available to the human, but that starts getting into wiretap laws. Still, I think it should be considered.)
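Here is the promised sketch of what an agent call flow honoring those three requirements might look like. Every name and structure in it is my own invention for illustration, not anything Google has described:

```python
from dataclasses import dataclass

@dataclass
class CallRequest:
    principal: str      # the human the Agent is acting on behalf of
    task: str           # what that person specifically asked for
    acknowledged: bool  # the Agent told them a phone call would be made

def place_agent_call(req: CallRequest) -> str:
    # (2) Only proceed if the person triggered this specific call and the
    #     Agent acknowledged to them that it would be making it.
    if not req.acknowledged:
        raise PermissionError("user has not approved making this call")

    # (1) Identify, up front, exactly who the Agent is acting on behalf of.
    opening = f"Hi, I'm an automated assistant calling on behalf of {req.principal}."
    print(opening)  # what the person answering the phone hears first
    # ... the actual conversation would happen here ...

    # (3) Report back, in some way, when the call has been completed.
    return f"Done: {req.task} (call placed for {req.principal})"

print(place_agent_call(CallRequest("Pat", "book a haircut", acknowledged=True)))
```

The point of the sketch is that each safeguard is cheap to express: one check before dialing, one sentence at the top of the call, one report at the end.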

I think we should also look to equivalent scenarios today. If the CEO asks his personal secretary to schedule a haircut... what does that secretary say and do? If a person uses a TTY relay service, does the relayer identify themselves to both parties? (I figure the former does just what I describe. I couldn't find an answer for the latter, but none of the TTY relay etiquette guides I found say they do.)

Google has, quite wisely, opened the conversation. In more than one sense of the word. I think it's good that they've done so, and that people are talking about all aspects of it. Let's keep talking.

In 1976 (yes, 1976), I heard my professor, one Don Norman, say pretty much the same thing. https://www.fastcompany.com/90202172/why-bad-tech...