Originally shared by Randy Smith
A new startup is finding a way to grow crops indoors economically within our current supply-chain infrastructure, and with tastier, more nutritious varieties than that infrastructure currently carries. Several things are coming together to let them do this, including substantial drops in the cost of LED lights, machine learning for the placement of towers and lamps, and vertical planting that uses gravity rather than pumps to distribute water. The dense production (far more produce per square foot than conventional farms) lets them put production centers very close to grocery distribution centers, getting the produce to stores much faster. That in turn allows them to use varietals optimized for taste and nutrition instead of shelf stability (and just getting them to stores faster improves the nutrition). And being indoors means they can minimize pests to the point where ladybugs suffice for control, avoiding pesticides.
I think there are a lot of implications to this, many positive, some disturbing.
+ It sounds like this is riding several technology curves (LED light, machine learning, IoT), so it's only going to get more efficient.
+ It's all technology all the time (the plants' roots aren't even in dirt, but in a plastic growth medium made from recycled bottles), which may give it an "eww!" factor, but I suspect it does produce nutritious, clean plants.
+ As it evolves, this technique could substantially raise the carrying capacity of the planet. That matters because, as I understand it, conventional farming with fertilizers depletes the soil, and I've been concerned that will take us to a place where we suddenly have no ability to feed the people on the planet.
+ However, the same result means we'll have less incentive to get a handle on our population growth. (Though simply getting countries through the demographic transition to wealthy societies will help here.)
+ And the same thing gives us much less incentive to take care of the environment.
So: Modified rapture?? :-} :-J
https://www.fastcompany.com/40420610/has-this-silicon-valley-startup-finally-nailed-the-indoor-farming-model
Monday, May 22, 2017
Sunday, May 21, 2017
Google, A.I. and the rise of the super-sensor
Originally shared by Mike Elgan
Google, A.I. and the rise of the super-sensor
(Read my column: https://goo.gl/Xz61sO )
Google dazzled developers this week with a new feature called Google Lens.
Appearing first in Google Assistant and Google Photos, Google Lens uses artificial intelligence (A.I.) to specifically identify things in the frame of a smartphone camera.
Google Lens is shiny and fun. But from the resulting media commentary, it was clear that the real implications were generally lost.
The common reaction was: "Oooh, look! Another toy for our smartphones! Isn't A.I. amazing!" In reality, Google showed us a glimpse of the future of general-purpose sensing. Thanks to machine learning, it's now possible to create a million different sensors in software using only one actual sensor -- the camera.
In Google's demo, it's clear that the camera functions as a "super-sensor." Instead of a flower-identification sensor, a bar-code reader and a retail-business identifier, Google Lens is just one all-purpose super-sensor with software-based, A.I.-fueled "virtual sensors" built in software either locally or in the cloud.
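To make the "super-sensor" idea concrete, here is a minimal sketch of one physical sensor feeding many software-defined virtual sensors. Everything here is hypothetical illustration, not Google Lens internals: a "frame" is simplified to a dict of already-extracted features, where a real system would run ML models on raw pixels.

```python
# Hypothetical sketch: one camera feed, N virtual sensors in software.
# A real system would run image classifiers on pixel data; here a frame
# is just a dict of pre-extracted features so the fan-out is visible.

VIRTUAL_SENSORS = {}

def virtual_sensor(name):
    """Register a function as a virtual sensor over the shared camera feed."""
    def register(fn):
        VIRTUAL_SENSORS[name] = fn
        return fn
    return register

@virtual_sensor("flower_id")
def identify_flower(frame):
    return frame.get("flower")  # stand-in for a flower classifier

@virtual_sensor("barcode")
def read_barcode(frame):
    return frame.get("barcode")  # stand-in for a barcode decoder

def sense(frame):
    """One physical sensor reading fans out to every virtual sensor."""
    return {name: fn(frame) for name, fn in VIRTUAL_SENSORS.items()}

frame = {"flower": "tulip", "barcode": "012345678905"}
print(sense(frame))  # {'flower_id': 'tulip', 'barcode': '012345678905'}
```

The point of the registry pattern is that adding a new "sensor" is a pure software change: no new hardware, just another function over the same feed.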
And that's not the only super sensor Google is involved with.
This is a new world:
http://www.computerworld.com/article/3197685/internet-of-things/google-a-i-and-the-rise-of-the-super-sensor.html
#supersensor
Friday, May 19, 2017
Great news!
Originally shared by Al Gore
Great news! Last Saturday, the California grid received a record-breaking 67% of electricity from renewables. http://ow.ly/MNO430bSkt8
Wednesday, May 17, 2017
A new study has revealed how human activity has been affecting the near-space environment, creating a bubble that...
Originally shared by Universe Today
A new study has revealed how human activity has been affecting the near-space environment, creating a bubble that protects against space radiation.
http://www.universetoday.com/135590/might-new-way-push-back-space-radiation/
Tuesday, May 16, 2017
Since our last update, we’ve made a lot of progress testing our 600 kW energy kite.
Originally shared by Makani
Since our last update, we’ve made a lot of progress testing our 600 kW energy kite. Recently, we generated power with the kite for the first time. It’s the culmination of two years of work scaling up each of the kite’s component systems, testing each system individually, and running thousands of hours of flight simulations to predict how our kite would operate in the real world. No matter how many preliminary tests and simulations you run, it’s always exhilarating to bring a new technology out into real-world conditions. Check out this video of Makani’s 600 kW energy kite generating electricity for the first time. This was the first of many tests we are doing with our latest prototype to gain a better understanding of how the kite interacts with the wind and how our controls handle different wind conditions.
https://www.youtube.com/watch?v=An8vtD1FDqs&feature=autoshare
Saturday, May 13, 2017
Saturday, April 29, 2017
2 years from sleeping during commute...oh, and regulatory approval
Originally shared by Noble
2 years from sleeping during commute...oh, and regulatory approval
“November or December of this year, we should be able to go from a parking lot in California to a parking lot in New York, no controls touched at any point during the entire journey.”
https://electrek.co/2017/04/29/elon-musk-tesla-plan-level-5-full-autonomous-driving/
Bullet vs. Glass

Originally shared by Colin Sullender
Bullet vs. Glass
Prince Rupert's Drops are beads of toughened glass created by dipping molten glass into cold water, which causes it to solidify into a tadpole-shaped droplet. As the glass cools from the outside in, it produces significant compressive stresses on the surface and tensile stresses at the core of the drop. The spherical shape of the bulbous head gives the glass enormous strength, such that it can be struck with a hammer without breaking. However, the tail is extremely fragile and causes the entire droplet to explosively shatter when cracked.
The SmarterEveryDay YouTube channel has a series of videos examining the behavior of Prince Rupert's Drops using high-speed cameras. Here a .38-caliber bullet can be seen disintegrating upon impact with the head of the droplet without damaging the glass.
Source: https://youtu.be/F3FkAUbetWU (Smarter Every Day)
#ScienceGIF #Science #GIF #Glass #Bullet #Impact #Stress #Mechanical #Strength #Stress #Molten #Drop #Droplet #PrinceRupert #HighSpeed #Camera #SmarterEveryDay
Tuesday, April 11, 2017
YES!!1!
Originally shared by Vilmar Simson (Ves)
http://www.blog.google/products/chrome/taking-aim-annoying-page-jumps-chrome/
'Neuron-Reading' Nanowires Could Accelerate Development of Drugs for Neurological Diseases
Originally shared by Neuroscience News
'Neuron-Reading' Nanowires Could Accelerate Development of Drugs for Neurological Diseases
A team led by engineers at the University of California San Diego has developed nanowires that can record the electrical activity of neurons in fine detail. The new nanowire technology could one day serve as a platform to screen drugs for neurological diseases and could enable researchers to better understand how single cells communicate in large neuronal networks.
The research is in Nano Letters (full access is paywalled).
#nanotech
http://neurosciencenews.com/nanowires-neurology-neurons-6388
Saturday, April 1, 2017
They may not be visible but we rely on many algorithms to run our complex, technological society.

They may not be visible, but we rely on many algorithms to run our complex, technological society. Until recently, algorithms were created and tested by humans. Now, computer-generated algorithms are coming into their own. Humans still need to test them, but even that is going to be (already is?) done automatically.
As usual, to get the full effect of a Yonatan Zunger post you have to read the comments over there.
Originally shared by Yonatan Zunger
This line gives me a sick feeling in the pit of my stomach. That feeling you get when something hits the mark way too well.
(ETA: To highlight a point that David Cameron Staples made in depth in a comment, the key word of both of these lines is "just." It's the use of the algorithm, or the orders, as an excuse to deny responsibility for one's own actions.)