Friday, October 09, 2015

LED Torchiere Conversion

Twenty-three years ago, my wife and I moved into an apartment with very little built-in lighting, and purchased a couple of torchiere floor lamps to put in the living room and office. Their 300 watt quartz-halogen lamps provided a powerful, white light when turned up, and could be dimmed to a warm glow for quieter evening settings.

Twenty years ago, we moved into the house we now own, and placed the two torchieres in the living room, making for nice, soft indirect lighting on either side of our two chairs.

Perhaps 15 years ago, we picked up a couple of wall-mounted Quartz-halogen projector lamps in the "as-is" section of the Burbank IKEA. I mounted these as reading lights on the shaft of the torchieres (one for each of us), removing the torchieres' failing internal dimmers and routing the reading lights' wires through the now-available dimmer control opening and down through the core of the torchieres' poles to their DC power "bricks." I added external, remote-controlled AC dimmers for the torchieres.

Some years after that, I mounted rear satellite speakers for a surround-sound audio system high on the poles of each of the torchieres flanking our seats, firing rearward and slightly inward to reflect off the wall behind. The speaker wires joined the AC wires feeding the torchiere lamps, and DC wires powering the reading lights.

I've been slowly converting our home lighting from incandescent to LED (and am very thankful never to have been faced with the ugly colors of fluorescent lighting). I'd noodled solutions for replacing the two 300 watt torchiere bulbs with lower-power LEDs for more than a year, researching available LEDs and power supplies (I still wanted dimmable lighting, which is a bit more exotic for LED driving circuits). Of late, there have been an increasing number of BIG, postage stamp-sized LEDs which produce light output adequate to replace any household application. I'd almost committed to buying separate components and fabricating a heat-sink/cooling system when I stopped to look more closely at an LED ceiling down-light conversion kit at Costco. After some thought, I bought the $27 kit, containing two fixtures promising the "same output as 120 watt" incandescent reflector flood lights: 1,250 lumens of light output from only 21.5 watts. Manufacturers quote nearly 5,000 lumens for the 300 watt quartz lamps in our torchieres; however, that output is omnidirectional - light radiates in all directions from the glowing filament. LEDs emit most of their light perpendicular to their mounting plane, which is exactly where I want the light going - up into the ceiling. So I thought I might get away with a lower rating and still achieve similar light levels.
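A quick back-of-the-envelope check of that reasoning. The 50% figure for halogen output lost inside the shade (light headed downward or sideways rather than up at the ceiling) is purely my guess, not a measured number:

```python
# Compare luminous efficacy, then compare "useful" upward light,
# assuming (my guess) half the halogen's omnidirectional output
# never makes it out of the upturned shade toward the ceiling.

halogen_lumens, halogen_watts = 5000, 300
led_lumens, led_watts = 1250, 21.5

print(f"halogen efficacy: {halogen_lumens / halogen_watts:.0f} lm/W")  # ~17
print(f"LED efficacy:     {led_lumens / led_watts:.0f} lm/W")          # ~58

useful_halogen = halogen_lumens * 0.5   # assumed upward fraction
useful_leds = 2 * led_lumens            # two fixtures, nearly all light aimed up

print(useful_leds / useful_halogen)     # roughly 1.0 -- comparable ceiling light
```

Under that assumption, two of the kit's fixtures land right around the halogen's useful output, which matches how the conversion actually turned out.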

Once home, I removed the diffusing lens (just breaking it off, then discovering that I'd missed two screws that would have removed it reversibly - which turned out to be moot), screwed the medium-base bulb adapter (the familiar threaded light bulb mount for the U.S.) into a work light socket, and fired one of the fixtures into the ceiling from the same height as the torchiere lamps. It wasn't bad - about the same output as the torchiere with an old, darkened lamp that was probably putting out half as much light as it could. But it was disappointing next to a freshly-relamped instrument.

I considered just going with the reduced light level, but then decided that I might be able to Siamese the two LED fixtures by sawing about 1/4 of each of their housings off. They would both then fit (sort of) within the upturned shade of the torchiere after I surgically removed all the original socket and reflector parts.

Further exploration and experimentation revealed that I could free the array of 18 LEDs and the steel disk to which they are bonded - I realized it wasn't glued as I first suspected, but merely stuck by silicone heat sink grease. The hockey puck-sized dimmable power supply was easily separated, though I had only about 1/3" of wire with which to solder on four connections between the power supplies and LEDs.

A couple of hours wandering around Home Depot yielded some galvanized steel plates (made for joining construction lumber), pop rivets, some threaded rod and a Plan.

I riveted the LED arrays from both of the fixtures on my fabricated mounting plates/heat sinks (with yet more heat sink grease), and tucked the power supplies underneath. The whole assembly floats above the torchieres' shallow, upturned bowl-shaped shades on two threaded rods.

Before (dimmed very low for photo)

After (also dimmed low)

When I fired it up, I was thrilled to find that the combined output of the 43 watts of LEDs equals or exceeds that of the brand new 300 watt quartz-halogen bulb, and the pattern on the ceiling and walls is perfect. The lights dim as hoped, though there is little to no color change (having warm colored lighting at night may be a better idea to prevent unwanted wakefulness), and the light levels abruptly change at some points - which matters not. Power consumption will drop to 1/7th of the original, and in the summer, the air conditioner will have to contend with over 500 fewer watts of heat in the living room at night.

The original halogen light on the left, and the LED on the right - the dimmers are set at only about 30 per cent, but the LED exhibits none of the reddish color change of a filament bulb.

Success! Our 20th-Century lamps are now 21st, and will continue to serve as reading lights and Surround Sound satellites for years to come.

Attending the 2015 AltCar Expo

My wife and I have attended several of the annual meetings of the AltCar Expo in Santa Monica, California.

Here are some comments I wrote for my old friends who are gear-heads.


Friday, June 19, 2015

The 2015 DARPA Robotics Challenge Finals - What Was It?

On June 5th and 6th, 2015, the Finals of the DARPA Robotics Challenge were held at the Pomona Fairplex in Pomona, California. This was the culmination of a three-year competition in which international teams developed hardware and software to complete a series of human-world tasks in a disaster/rescue environment. Organized and funded by the United States' Defense Advanced Research Projects Agency, the competition was inspired by the aftermath of the 2011 Fukushima Daiichi nuclear power plant disaster, and was intended to spur development of tools to aid in crisis events which present high risk to humans. (When existing robots were deployed to evaluate the extremely dangerous areas of the Daiichi site, it became evident that many critically-useful tasks could not be performed by them.)

In previous years, DARPA held the DARPA Grand Challenge and the DARPA Urban Challenge, which pitted competitors against each other and against off- and on-road obstacle courses with motorized vehicles (which were mostly modified existing motor vehicles originally built for humans).

We attended the last day of the Robotics Challenge Finals, and what we witnessed was surprising and impressive, if not always for obvious reasons.

Admission to the event was free. To manage our time, we only briefly perused the impressive array of static exhibits laid out on the tarmac outside the Fairplex's horse-racing stadium. Once we saw what was actually happening inside the stadium, we chose to prioritize watching the active competition.

I think it was better that I was relatively unprepared for the event. I'd only very briefly skimmed some documentation about it - enough to know that a parameter of the competition was "intermittent periods of wireless connectivity" between the human controllers and the robots. This was meant to represent real-world conditions in which robots on a disaster site might lose communications contact because of structural shielding, radio-frequency interference from rescue operations, or equipment malfunctions.

I think my wife Joni and I were both surprised at the scale of the presentation, and the polish of the event. It was extremely well produced. Crowds were well-managed. The event’s graphics were well-placed and informative. Extremely elaborate multi-camera video and professional announcing and field reporting work provided a polished video feed to attendees and Internet viewers. Food and facilities (an easy call for the Pomona Fairplex, which hosts many large events annually, including the month-long, 1.5M visitor Los Angeles County Fair) were very good. I remind you that this was an exhibition of the Defense Advanced Research Projects Agency, the part of the United States military complex which a half-century ago created the embryo of what would become the Internet, and also any number of known and unknown defensive and offensive military technology projects. It’s a government project, and that’s not often a recipe for a Good Time.

In the huge covered grandstands at the Fairplex's horse-racing track, we found seats in the bleachers and took in the setting: in front of a crowd of a couple of thousand spectators, five stages were constructed. Flanking a center stage for human presentations, four identical three-walled "sets" were constructed (with an open side theatrically facing the audience) as obstacle courses. The robots were presented with eight tasks in an environment with human ergonomics, and could score a single point for each task, for a maximum of eight points. The total time required to finish whichever tasks they chose to attempt was recorded. Final places were based upon total points achieved and lowest time required, and the top three finishing teams were awarded $2 million, $1 million, and $500 thousand respectively. It should be noted that (as with the previous Grand and Urban Challenges) the costs of developing, purchasing and constructing the robots for several of the teams exceeded the $2M first-place prize (I think I heard a $1M price tag quoted for one of the 3rd-party robots used by several teams), so for many teams, prize money was not a goal in and of itself. As with previous DARPA Challenges, teams were created and sponsored by commercial industry, government agencies, and educational institutions. Expense, creativity and innovation varied accordingly. The twenty-three teams featured competitors from six countries: Japan, Germany, Italy, Republic of Korea, China and the United States.

Multiple competition tracks distinguished teams that received some funding from DARPA from those that received none.

Different from previous DARPA Challenges, teams were not required to design their own robots. Several teams chose to acquire existing robots from other enterprises, and modified or designed their own software for autonomous and human control. Among the 3rd-party robots purchased by teams, the most visibly recognizable were the six Atlas robots from Boston Dynamics. Many readers will know Boston Dynamics from YouTube videos of their “Big Dog” quadrupedal walking robot. (Boston Dynamics was purchased by Google in December 2013.) Another popular choice for teams not developing their own robots was the Robotis Thormang.

During the 2013 DRC Trials, robots were allowed to be mechanically tethered to protect them against damage during a fall, and were allowed to use an umbilical cable to provide external power and data connection. For these Finals, robots were disallowed from either of those aids. They would have to carry all the power required to operate for the 60 minute maximum task period onboard, and would have no safety system in the event of a fall (the final Task is a climb of 4 steps, and some of these mostly top-heavy robots weigh in excess of 180 kg/396 lbs, so they can really do themselves some damage). Most, but not all of the robots are bipedal walking robots, primarily because of the assumption that they would be the best solution for navigating in spaces and over obstacles designed for human beings. There are notable exceptions, like Team Aero’s Aero DRC, with four legs which also have wheels, and NASA Jet Propulsion Labs’ aptly-named RoboSimian, which contorts its four long multi-jointed appendages not unlike a tree-dwelling monkey to whatever configuration is appropriate for completing the current task safely.

Highlights from the 2013 DARPA Robotics Challenge Trials, showing safety-tethered and umbilical-connected robots attempting tasks.
Perhaps the most challenging change between the 2013 Trials and the Finals regards remote control and autonomy. For the Trials, robots could be constantly controlled by a physical data connection - wires. In the Finals, not only would all communications between teams and robots be wireless, but the links would be deliberately and intermittently degraded by DARPA organizers to simulate real-world communications problems at rescue sites. To this end, the expectation was that teams would develop software running internally within the robots that allowed them to complete the competition tasks without constant human help. According to DARPA video team interviews, teams typically "drive" robots to the task area and then initiate an autonomous routine to complete the task. Using multiple-camera vision and laser 3-dimensional scanners, the robots would have to identify targets - a doorway, a door handle, a valve wheel, handheld power tools, stair steps, a shape on a plywood wall to be cut around - and manipulate the items in their environment to complete the current task. I think that had teams opted NOT to develop autonomous software, they might still have been able to attempt every task manually. But because they wouldn't know when and for how long DARPA organizers would interrupt their communications connection, their hopes of achieving competitive times, or even completing the stage within the maximum 60-minute time, were slim.

I’m not certain, but I believe that the human team members were only able to use their robots’ sensing mechanisms to make control decisions from their command centers. So even if they chose to manually perform some tasks, they depended upon having designed adequate sensing infrastructure to apprehend the robot’s environment. So if they had only supplied their robot with a single camera view, it might prove difficult to guide a manipulator with no reference as to depth. We saw brief glimpses of 3-dimensional visualizations of objects on team computer monitors which reveals the nature of the robots’ “vision” systems. Most, if not all of the robots use LIDAR laser scanning systems (often visible as spinning devices mounted high on the robot) to provide their robots with a 3-dimensional model of both its environment and itself - robots must “look” at their own manipulator arms and graspers to determine where they are relative to their environment. (Here’s is a sample of this footage in the middle of a 4-hour DARPA video of the event.)

It should be noted that not all robots and teams attempted all tasks. Teams were allowed to choose to bypass some tasks, if the best use of their available resources with respect to the defined tasks was to skip a potential point-earning task in exchange for much lower elapsed total time. This might completely eliminate significant complexity in the design of the robot, and for those teams not expecting to place first, might improve their finishing positions by avoiding robot- or time-killing tasks. (It’s worth noting that the top three finishers were also the only teams to earn the maximum eight points.)

Teams were allowed to declare a "reset" on any given task, which (I think) earned them a 10-minute time penalty but allowed them to attempt to earn that point again.

The Tasks

  • Drive Task (1 point) - the robot drives a utility vehicle over a course of about 100 feet, with two traffic barriers creating a chicane about 2/3 of the way down the path. The robot is NOT responsible for getting into the vehicle - only for steering and operating the throttle pedal. Notably, several of the larger robots were "seated" in the passenger seat and operated the controls on the driver's side - presumably because they would otherwise not clear the steering wheel. The track for this driving task ran the width of the fairground's horse racing track, so the four competition courses covered a larger area than the simulated task rooms. This proved surprisingly difficult, and there were a lot of human team members pushing the vehicles back from encounters with barriers.

UNLV’s “Metal Rebel” completing the Drive Task successfully. 
(NOTE: You can view any of these videos full-screen by clicking on the button in the lower-right corner of the YouTube video windows.) 
  • Egress Task (1 point) - the robot must exit the vehicle and move about six feet away across a goal line. Teams were allowed to quickly modify the vehicles (without tools) to assist their robots in this task. Some extended simple structures out of the sides of the vehicles to serve as “handrails,” or other temporary stabilization points, and one team threw out a box on ropes to serve as a helper step for their walking robot (not all robots walked, some had wheels and tracked belts). This Task was particularly risky, and I heard a comment somewhere that teams didn’t appreciate the jeopardy that this Task presented to their robots early in the 60-minute completion window. Crashing here (and we saw some big crashes stepping out of vehicles) could prevent a team from earning more than the single Drive Task point.

Here's Worcester Polytechnic Institute's and Carnegie Mellon University's joint entry "Warner" rehearsing Egress the week before the event. If this seems slow, know that some robots took 10-15 minutes to complete one task - most of which was spent standing in one place and "thinking."
  • Door Task (1 point) - the robot must open the door (with a lever-style handle), push the door open, and walk across the threshold to gain 1 point. Sounds simple, but we witnessed several failures - including big “crashes” - right here. Many ‘bots stood at the doorway for several minutes, apparently assessing the position of the door and handle.

Team NEDO-JSK’s “Jaxon” earns a point by walking through a doorway. There’s nothing like hearing thousands of people cheer because someone walks through a doorway. (I’d accidentally written that as “walks through a door,” which might have been more than a typo at this event.)

Tartan Rescue’s “CHIMP” demonstrates that its unique design allows it to recover from a crisis in the Door Task which would have been a failure for many other designs.
  • Valve Task (1 point) - this task was specifically inspired by events in the wake of the Fukushima disaster. A wheel-type control valve handle must be rotated counterclockwise 360 degrees to earn a point. This is trickier than it sounds. When we do a task like this, we unconsciously shift our weight on our feet to compensate for the forces we impart into the wheel.
UNLV’s Metal Rebel wins a point at the Valve Task.
  • Wall Task (1 point) - in this impressive challenge, robots select from any of four handheld (human) cordless cutting tools on a wall shelf (most robots knocked off at least one other tool during the task - which was not penalized). They then must completely cut a painted shape out of a section of drywall. All robots simply dropped the tool on the floor upon completion of the task, revealing that there was no specification for the final disposition of the tool. Most robots managed this pretty well. A few missed the painted target slightly, cutting slightly through the circumference of the black circle. But all that we saw managed to take a second cut and earn the point.

See NASA JPL’s RoboSimian make the cut to a cheering crowd in the Wall Task.
  • Surprise Task (1 point) - the "surprise" of this task is that DARPA didn't inform teams what the task was until the day before the competition. However, I just found video of a team rehearsing what would be the Surprise Task back in March 2015, so I guess it was a surprise choice from a known collection of tasks. The surprise task turned out to be unplugging an industrial electrical cord and plug from one outlet at "chest height," and plugging it into another. If we pulled a plug at this height, we would unconsciously shift our weight backward and forward to keep from tipping as we pulled and pushed on the plug or cord. For some of the bipedal robots we watched, this tipping proved disastrous.

South Korean Team KAIST (who would go on to win the Finals and $2 million) practices the Plug Task months before the Finals.
  • Rubble Task (1 point) - to complete this task, the robot must either cross over an uneven arrangement of cinder blocks (these were meticulously identical on each of the four competition stages), or negotiate an alternate obstacle: an assortment of loose “debris” on a smooth floor. They were allowed to simply pick up the debris, but no robot we saw attempted that strategy. Notably, we saw one robot simply push its way through the debris (each piece of debris weighs less than 5 pounds) to cross the yellow line earning it the point for the task, though it did risk catching both ends of some long pieces, which might have prevented it from continuing to roll across the smooth floor.

Team IHMC Robotics “Running Man” attempts unsuccessfully to negotiate the Rubble Task.
  • Stairs Task (1 point) - for the last possible point-earning task, robots could climb four steps to a railed platform. There was a handrail on one side of the stairs. Strategies for climbing these stairs varied from the human-like approach of many frighteningly top-heavy bipedal designs to the monkey-like climb of NASA JPL’s RoboSimian. 

Team Tartan Rescue’s “CHIMP” uses the unique tracked belts on its four limbs to complete the final task for the maximum 8 points to win the $500K third-place prize.

"Running Man" shows us all just how NOT to climb the stairs. Despite this fall and the Rubble Task fall on Day 1 of the Finals, Team IHMC effected repairs overnight, competed on Day 2, and finished in 2nd place overall.

A full description of the eight robot tasks is here on DARPA’s overview page for the Robotics Challenge Finals.

Our Takeaway

We were delightfully surprised at the efforts of the Defense Advanced Research Projects Agency to share this project with taxpayers. The event was well run, and the presentation was watchable and captivating. Watching robots slowly attempt each task was thrilling - the sense of risk was not dissimilar to that of an Olympic athlete risking a lifetime of devoted practice and development, only to lose it all in a single fall. Hearing a crowd of people cheering and "Oh!"ing in unison at almost imperceptible accomplishments of these sometimes lifelike machines was unexpected, and added considerably to the experience.

Having watched online videos of the progress of walking robots over the past decade, we were still impressed at the state of the art - no doubt advanced by this competition. It wasn't so much about locomotion, but the apparently autonomous operations. Ironically, because of pauses which sometimes lasted several minutes, it was apparent that we were seeing software routines attempting to interpret the conditions of the environment with which the robots were presented. And the success rate of the competing robots shows how those teams rose to the occasion.

Will tomorrow's robots be better-suited at helping humans to perform perilous tasks in the near future? Most certainly. Moreover, it's easy for us to imagine that in the very near future, we might be seeing machines capable of adapting and reacting to their environment in ways that resemble living animals.

I'm looking forward to it.

More DARPA Robotics Challenge Finals Videos

Time lapse of the winning 44 minute, 28 second, 8-point run of Team KAIST’s “DRC-HUBO,” earning the Daejeon, South Korean team a $2 million prize.

Here is a time lapse video of TEAM IHMC Robotics “Running Man” completing their entire 50 minute, 21 second, 8-point run for a $1 million 2nd-place prize.
Here is DARPA's YouTube channel, on which they have posted an 8-and-a-half-hour video of Day 1's competition, and almost 9.5 hours of Day 2 (your Tax Dollars at work).

The video directors of the DARPA Robotics Challenge (which was being streamed live to the Internet, as well as to the five huge video displays behind the competition stages) played this hilarious blooper reel from the previous day's competition even while the last dozen teams were agonizingly tending their robots through the event's final day of competition. It seemed a little cruel to those teams who had suffered these indignities, but I don't blame the organizers for trying to make the event entertaining - we and the rest of the audience were in rapt attention during the challenge stages, and oohed, ahhed, gasped, laughed and applauded with more enthusiasm than we might have at many human-based events.
Boston Dynamics also demonstrated a couple of their "Spot" quadrupeds - stunning.

Monday, June 15, 2015

Comments about eBay article: 10 “Before Their Time” Technology Devices

(Several friends shared this eBay article on Facebook. What started out as a "Comment" grew to, well, this.)

Original eBay article: '10 “Before Their Time” Technology Devices'

Oh my, yes, I know all these items well, save the Polavision . . .

Apple Newton & Palm PDA

I still regret not collecting a Newton somewhere along the way.

I used Palm OS for almost a decade. I can probably do 50-60 words per minute in Graffiti(TM). I carried a cable in my pocket so I could Web browse via a connection through my Motorola StarTac (the best cell phone ever) and RAZRs (with a mild hack to allow them to act as modems).

Mattel Power Glove

I attended all 24 continuous hours of the Cyberthon virtual reality convention in San Francisco (not sleeping from waking in L.A. until I was on the plane to fly back from SFO 40 hours later), at which I saw Mattel PowerGlove co-developer and VR evangelist Jaron Lanier speak. I also used many "goggle and glove" immersive VR rigs using Lanier's VPL Data Gloves, which measured the amount of bend in each finger joint via the change in light transmission through optical fibers that were slightly cut at the joints.

Sony Digital Cassette Players/Recorders

Never owned a DAT recorder, but I told anyone who would listen about the Recording Industry Association of America's efforts to thwart what it perceived as a dangerous vector for music piracy. DAT recorders' introduction to the U.S. was delayed thanks to the RIAA's lobbying efforts. The RIAA also pushed for a federal trade restriction requiring DAT recorders sold in the U.S. to incorporate technology to detect and honor commercial recordings that were "marked" with a notch filter (a tiny part of the sound spectrum was artificially removed from recordings - if the CopyCode-enabled DAT recorder saw no audio level in that narrow band of frequencies, it would refuse to go into Record mode). DAT technology never succeeded in the consumer marketplace, but saw some application in professional audio production.
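The detection idea is simple enough to sketch in a few lines. Note that the specific notch band, reference band, and threshold below are my own placeholders to illustrate the concept, not the actual CopyCode specification:

```python
import numpy as np

def band_energy(signal, rate, lo, hi):
    """Mean spectral magnitude of `signal` within [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def looks_copycoded(signal, rate, notch=(3700, 3900), ref=(3000, 3200), ratio=0.1):
    """Flag a recording whose notch band is nearly empty relative to a
    nearby reference band -- the CopyCode idea, with made-up numbers."""
    return band_energy(signal, rate, *notch) < ratio * band_energy(signal, rate, *ref)

# Broadband "music" stand-in: one second of white noise.
rng = np.random.default_rng(0)
rate = 44100
noise = rng.standard_normal(rate)

# Simulate the notch marking by zeroing that band in the frequency domain.
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), d=1.0 / rate)
spec[(freqs >= 3700) & (freqs <= 3900)] = 0
notched = np.fft.irfft(spec, n=len(noise))

print(looks_copycoded(noise, rate))    # unmarked recording -> False
print(looks_copycoded(notched, rate))  # "marked" recording -> True
```

One obvious weakness of the scheme, and part of why it failed testing: real music sometimes has naturally little energy in any given narrow band, so a detector like this produces false positives on legitimate, unmarked recordings.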

These strategies are still practiced by the movie and music industries, which is why we never got a replacement for VCRs, and you can't really share that story you saw on television with your grandfather. He'll just have to wait and see if anyone cares to offer it for sale, and if so, he'll have to get a broadband Internet connection, a credit card and an HDTV to see that program, because only a monitor with HDCP technology will convince his new streaming media player that he's not trying to pirate a digitally-perfect copy of that story about Korean War ships by inserting some data capture device between the streaming box and TV. You and your grandpa look like pirates to the MPAA and RIAA. If you read the fine print of those 40-page user agreements you've been checking "yes" to, you'll find that the music, movies and TV shows you've been "buying" on streaming services certainly aren't "yours," and the next time you want to listen to or watch one of your purchases, you may find it's no longer playable for reasons that mean nothing to you.

Lest you think that your cable/satellite DVR fulfills the functionality of your old VCRs, the MPAA has poked its paranoid nose in there as well. If you haven't encountered it yet, cable and satellite providers have incorporated infrastructure into their business agreements, data networks and your DVRs so that you can be prohibited from recording any given show on your DVR, and already recorded shows can be rendered unplayable.

If I'm ranting, it's because we're being mistrusted by entertainment industry heads who wrong-headedly believe that it's possible to stanch the losses of media piracy by globally impeding the cultural and humanitarian benefits of communications technology. The truth is that because digital media can be infinitely and perfectly copied, only a single determined pirate has to be successful to pirate intellectual material. In reality, thousands of clever profit-oriented pirates are picking away at any given time at whatever weaknesses they can exploit, and those of us who are actually paying money that goes to MPAA and RIAA members and signatories for our entertainment content - we get treated like criminals and the notion of Fair Use is further eroded.

Polaroid Polavision

It strikes me as odd that this is the only product on this list which doesn't ring a bell, since I started shooting experimental Polaroid stills and making Super-8 movies in 1971 or so, and became fascinated with the instant-playback promise of the $1,795 1/4" AKAI videotape recorder being sold in the 1971/2 Lafayette Radio Electronics catalog. My production life began with my family giving me a Norelco Philips cassette recorder around 1967 (almost the year that format debuted), so just the mention of "videotape" was enough to explain what that might mean to the 11-year-old filmmaking me.

WebTV (MSN TV)

Before it was "MSN TV," WebTV offered a way for citizens to have Internet email and rudimentary Web browsing without owning "a computer." Obviously a purpose-built computing appliance itself, WebTV used the consumer's television as a display device. This presented an experience familiar from other TV-connected devices (e.g., VCRs, video game consoles, DVD players), and made using the Internet a non-threatening, "family room" experience. But as the content of the World Wide Web became richer and more sophisticated (in the first years of the Web, there weren't even images), trying to view Web pages became more and more challenging. Microsoft bought WebTV in the late '90s, at which point it had over 800,000 subscribers generating over a billion dollars of annual revenue. But personal computers would win out over Internet appliances (too bad, really - no one should have to maintain a computer if all they do is email and Facebook), and MSN TV was shuttered at the end of 2013.

Coleco Electronic Quarterback

"Coleco Electronic Quarterback?" Hey, Mattel Electronic Football was first, and while it didn't offer passing and kicking, it should get the mention. I took my Electronic Football along on the Campbell College Jazz Band tour in 1979. I'd hear the strangely syncopated "Charge!" tune play from somewhere on our former Greyhound bus as my band mates passed it around during our tour of eastern North Carolina high schools. Here's me demonstrating the game when my friend Riley sent me one in December 2012 (I still have my original as well):

Dragon NaturallySpeaking

Dragon NaturallySpeaking was the best attempt at machine speech-to-text in a consumer product to date when it debuted in the late '90s. But for me, a pretty fast touch-typist, it was far more mentally constipating to learn to speak my thoughts perfectly as separate words (most of us use a lot more "ums" and "uhs" than we realize until it's transcribed by a computer), and far more work to edit after the fact than to compose on a keyboard. Today, some of the best speech-to-text gets done remotely on massive computing platforms after streaming your audio somewhere else in the world over an Internet connection (something I anticipated over 20 years ago, before we had the Internet in our pockets), rather than getting done locally on your computer. When Joni was in college, one of her professors was interested in machine speech recognition, but said that the task of parsing "connected speech" - actual conversation where the starts and stops of each word overlap - was devilishly difficult. Her professor wasn't wrong. It took three decades of exponential performance improvements - millions of times the computer speed - as well as intensive research to achieve the amazing speech recognition that's at our fingertips today. As with a human brain (but still not nearly as good), modern speech recognition systems try to guess the likeliest words to follow those already interpreted. And Google even knows where you are and what else you've been searching for recently, so it applies that to its guesswork as well. If you always wanted to talk into the air to your computer assistant, it's pretty much here.
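As a toy illustration of that "guess the next word" idea, here's a tiny bigram model - my own sketch of just the language-model half, nothing like the scale or sophistication of a real recognizer, which also weighs acoustic evidence:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training corpus."
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def likeliest_next(word):
    """Most frequent word observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "the" was followed by "cat" twice, "mat" and "fish" once each,
# so when the audio after "the" is ambiguous, bet on "cat".
print(likeliest_next("the"))  # prints: cat
```

A real system does this over vastly larger corpora and longer contexts, and combines the word probabilities with how well each candidate word matches the audio, but the bet-on-the-frequent-continuation principle is the same.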

Diamond Rio MP3 Player

I bought a Diamond Rio 500 MP3 player - the third MP3 player product from that company. I played a little music on it, and used it to listen to Audible audiobooks (Rios were among the first devices to support Audible's encryption scheme). But when we got the first iPod (the beautiful if bulky 5GB model with a hard drive and FireWire connectivity), there was no going back. Actually, I dug the Rio out of retirement briefly to play podcasts while at the gym, but it again got bumped by the superior experience of a tiny iPod nano. 

AT&T Videophone 2500

Forty-eight years ago, I attended the Expo '67 World's Fair in Montreal, Quebec, Canada. Inside the iconic U.S. Pavilion, a massive Buckminster Fuller geodesic sphere, I talked to my father (50 feet away) on an AT&T Picturephone. In retrospect, the exhibit hardly represented the immense amount of money AT&T is purported to have spent on developing the technology. The demo was just two phones and two closed-circuit cameras and CRTs. Pushing a button on your phone let each party see only themselves - a nod to the notion that we're not always, uh, presentable. Over the decades, I never really missed that particular Vision of Tomorrow from Yesterday. I would often hear people proclaim, "Where's my Picturephone?" but I couldn't really imagine too many of my acquaintances caring to participate. I played with videoconferencing on personal computers in the early '90s. At a time when maintaining a dialup data connection could be challenging, successful video chat was, to say the least, frustrating. One user would have picture but no sound, and the other would hear sound playing back many times slower than normal. So you'd still be listening to a short sentence of them saying, "disconnect and I'll call you on the phone" (at normal pitch, but each syllable sounding for seconds) two minutes after they'd given up after three busy signals. I know families separated by great distances who video chat regularly. And when Joni and I have been traveling separately during the last decade or so of well-developed video chat, it's been wonderful just to look at each other wordlessly. We set up video chat cameras on our moms' computers (and even on a "smart TV"), but after the novelty wore off, we discovered that you eventually stop looking at each other in the second or third hour of conversation, and we haven't video chatted with them much in years. Today, even having a Picturephone in my pocket doesn't play a part in my daily life.

Ubiquitous access to videoconferencing did provide an experience I found worth sharing back in 2007.

"What's possible" and "what exists" have been passionate topics for me as long as I can remember. I grew up in the glorious Space Age and was inundated with media mythology about the spy-whiz technology of the Cold War (some of which was true, and much of which has come to pass). In these times, when so much technological might has become everyday magic for the uncurious masses, I still Want To Know.

Tuesday, May 19, 2015

Advanced Searches in the Mac Finder

Ever wanted to find everything in a Mac's folder except something, like "everything except folders named '_notes'?" You can actually do Boolean searches in the Finder, but in their typical fashion, Apple hasn't made this "power user" feature obvious, apparently to avoid confusing the typical casual user.

This Macworld article explains how to make the most of Advanced searches in the Finder.
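For the programmatically inclined, the same boolean-exclusion idea can be sketched outside the Finder. Here's a minimal Python illustration (not an Apple tool, just the "everything except folders named '_notes'" logic expressed with the standard library):

```python
# Programmatic sketch of the Finder search "everything except
# folders named '_notes'", using only the Python standard library.
import os

def find_excluding(root, excluded_dir="_notes"):
    """Yield every file path under root, skipping any directory
    named excluded_dir (and everything inside it)."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place keeps os.walk from descending
        # into the excluded folders.
        dirnames[:] = [d for d in dirnames if d != excluded_dir]
        for name in filenames:
            yield os.path.join(dirpath, name)
```

Spotlight can run similar queries from Terminal via `mdfind`, though its metadata query syntax is a topic of its own.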

Sunday, April 05, 2015

Camera Sensor Size & Crop Factor Demonstration

I created a short video to graphically demonstrate the consequences of exchanging lenses (assuming compatible physical mounting systems) between cameras with image sensors of different sizes.

Camera Sensor Size & Crop Factor
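The arithmetic behind crop factor is simple enough to sketch in a few lines of Python. The crop factor is the ratio of sensor diagonals relative to a 35mm full frame (36 x 24 mm); the APS-C dimensions below are approximate figures for a typical Nikon DX sensor:

```python
# Crop factor = ratio of sensor diagonals, relative to 35mm full frame.
import math

def crop_factor(width_mm, height_mm, ref=(36.0, 24.0)):
    """Crop factor of a sensor relative to a reference frame
    (default: 35mm full frame, 36 x 24 mm)."""
    diag = math.hypot(width_mm, height_mm)
    ref_diag = math.hypot(*ref)
    return ref_diag / diag

# APS-C (Nikon DX, roughly 23.6 x 15.6 mm) vs. full frame:
factor = crop_factor(23.6, 15.6)        # about 1.53
# A 50 mm lens on that body frames like a ~76 mm lens on full frame:
equivalent = 50 * factor
```

The same function works for any sensor whose width and height you know, which is why the video's lens-swapping demonstrations come down to one number per camera body.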

Thursday, March 05, 2015

How Does Facebook Know My Browsing History?

Yesterday (March 2, 2015), in a private email response to a friend about a movie actor with whom I’d worked, I found a still image of that actor in that movie on Google Image Search. I went off on a tangent about doing lighting in that very scene, and (as I am wont to do) when I made a reference to a specific kind of motion-picture lighting instrument, I looked up that instrument in a Google search, thinking that the friend would be interested in seeing more about the specific device. Not able to find the actual manufacturer’s web page for an “LTM Pepper 100 Watt Fresnel Tungsten Light,” I copied the URL for the top Google match, a page from the online store for photography retail giant B&H Photo in New York City. However, I forgot to include the link in the email (which I didn’t discover until doing some sleuthing today).

So at this point yesterday afternoon, all I’d done was visited that B&H Photo web page, and written and sent some email and NOT included a link to this web page.

Today, when I first looked at facebook on my phone, I saw (see screenshot) the now-familiar image of an LTM Pepper 100 Watt instrument in a thumbnail of a B&H Photo page, and facebook informed me that two of my Friends had “Liked” “B&H Photo Pro Video Audio.”

What? I’d never communicated about that lighting instrument on facebook - only in private email. I looked at my previous day’s email to see where I’d included the link to the B&H LTM Pepper page, and discovered that I’d forgotten to even include the link in that email.


Part of what was at work to produce this result was an HTTP cookie stored by my web browser. What are cookies? They’re little bits of text that a website stores in a place accessible to the user’s web browser. When the user revisits a site, the site asks the browser to look for its own cookies in that storage place. What do they store? Mostly information that makes visiting the website a better experience for the visitor. Which language you’d like to view, whether you’ve visited before, and which of their pages you’ve already viewed are among the many possible bits of information cookies can hold. Websites promise theoretical anonymity about user identity in cookies, and try to avoid including information which users might find an invasion of privacy. (See “More Information About Cookies” below for links to help you control your browser’s behavior with cookies.) Because users can view the contents of these cookies themselves (procedures vary; search for something like “view cookie contents”), there’s a promise of transparency about what kind of information is shared, and there’s risk in giving the impression that users’ personal data might be revealed to others (even though that happens all the time).
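To make that concrete, here's what a cookie's actual shape looks like - a name=value pair plus a few attributes - parsed with Python's standard library. The cookie contents below are invented for illustration, not B&H's actual cookie:

```python
# A cookie is just text: a name=value pair plus attributes, sent by
# the server in a Set-Cookie header and stored by the browser.
from http.cookies import SimpleCookie

# An invented example of what a server might send:
raw = 'lang=en-US; Path=/; Max-Age=31536000'
cookie = SimpleCookie()
cookie.load(raw)

print(cookie['lang'].value)       # the stored preference: en-US
print(cookie['lang']['max-age'])  # how long it persists: 31536000
```

Everything the site "remembers" about you this way has to fit in little records like that one, which is why viewing your own cookies is as revealing as it is.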

This was the cookie that the B&H Photo site stored on my computer. (This was viewed in Google Chrome browser at chrome://settings/cookies.) 
(Screenshot: I was searching for garbage disposal splash guards earlier this week, and they just showed up on this tech site's ad insert.)
You may have seen an ad for a specific kind of item you’ve previously shopped for online popping up in a frame on another site. Indeed, as I was just researching to write this article, a web page at a popular technology site displayed an Amazon ad with the very garbage disposal splash guards I was perusing on the Amazon site this week. That’s clearly a customized “drop in” ad which only I will see, and it was generated in only two or three seconds (amazing) between the time I requested that page and the time it actually displayed in my browser. In this case, Gizmodo has deliberately put code in their web page that lets Amazon run its own web browsing session in a little window. From within that session, Amazon can access the cookies it set on my computer in previous visits to its site.


So how does facebook know about something I searched for in my browser? Well, until somewhat recently, data collected from your online activities crossed between different websites only when both sites had common ownership, when one sold cookie information and the other bought it, or when both subscribed to services which aggregated and shared cookies among subscribing businesses.

What’s new here is that facebook made it look like my two friends had an interest in exactly the item to which I had simply browsed on the B&H website at some time in the past, without my posting about it on facebook or even knowing that B&H had a facebook page.

I just wrote the two friends to ask when they "Liked" the B&H page, and whether they actually saw the LTM Pepper page in their facebook feeds. I’ve only gotten one response so far, but he’d Liked the B&H facebook page at least a year ago, and had never seen the page with the LTM lighting instrument.

Facebook has been trying to make the most of combining its position as the world's largest social-networking site with targeted advertising. Using only the "Like" feature with commercial facebook pages, it claims substantial success in profiling its users for some characteristics, and it attempts to use those profiles to present "appropriate" advertising to each user. But among other things, many users don't contribute Likes at all in their fb activities.

In 2014, facebook began employing a new mechanism to track the web browsing activities of users outside of facebook. It can’t see everything a user does with a browser - among the techniques employed, facebook takes advantage of a mechanism its advertisers use to learn whether people are visiting their websites because of facebook ads. So it only works for certain sites. But facebook is so big that few online businesses can resist advertising with them, and therefore using their tracking system.
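The mechanism can be sketched in a few lines. When a page embeds a third-party resource (an ad frame or an invisible "pixel"), the browser's request for that resource carries two revealing things: the third party's own cookie, and the embedding page's address in the Referer header. The domains, URLs, and cookie values in this Python sketch are entirely hypothetical:

```python
# Sketch of third-party tracking via an embedded resource.
# All domains, URLs, and cookie values here are hypothetical.

def pixel_request(page_url, cookie_jar, tracker="tracker.example"):
    """Build the headers a browser would attach when fetching an
    embedded resource from `tracker` while viewing page_url."""
    headers = {"Referer": page_url}  # reveals which page was visited
    if tracker in cookie_jar:
        # The browser sends back whatever cookie the tracker set on
        # an earlier visit, identifying the same user again.
        headers["Cookie"] = cookie_jar[tracker]
    return headers

jar = {"tracker.example": "uid=abc123"}
req = pixel_request("https://retailer.example/ltm-pepper-100w", jar)
# The tracker now sees that user uid=abc123 viewed that retailer page.
```

Note that the retailer's page never reads the tracker's cookie itself; simply embedding the resource is enough for the browser to volunteer both pieces of information to the tracker.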

So just as with the Amazon garbage disposal splash guard ads I saw on a 3rd-party site, I’m the only one seeing the LTM Pepper page at the B&H Photo website on my facebook Mobile wall (I saw no sign of this B&H link on the desktop version of fb). But when facebook finds a correlation between information gleaned from my browsing history and any of my social connections who have "Liked" a commercial site, it looks like my Friends are providing testimonials for an item which I’ve viewed on the Web. In fact, neither of those two fb Friends knew about the LTM Pepper, knew that I had visited the B&H site, or knew about each other.

Hey, facebook is a for-profit, publicly-traded company, and has a duty to its stockholders to make money. We live in a capitalistic society. Still, in pursuit of a more effective way to serve its advertisers, changes like these - which could seem “creepy” or invasive to users - look like a risk to facebook’s user base, and its dominating position in the social-networking world is anything but assured.


If this has made you paranoid about cookies, I wouldn’t be too concerned, just know that they’re there, and that they could reveal something about your online habits. If you’re uncomfortable with the idea, you could try disabling cookies and observing which of your favorite websites no longer work the way you’d like them to. You can then selectively allow sites you trust or can't live without to use cookies to restore useful functionality. (For what it’s worth, I allow full cookie access, and just know that whatever I might be doing online isn’t exactly private.)

There are also ways to browse for a single session or all sessions without storing any information on your computer. Search for “private browsing” for more information.


If you want to opt-out of facebook’s tracking system, here are some articles which detail the steps required:


Here’s a nice article about Internet Cookies by Marshall Brain.

At the following pages, you can find out how to completely or selectively disable cookie activity, and delete some or all of the cookies already stored by your browsers:

Here is a non-profit site dedicated to the topic. It’s a bit out of date, but the information is good.

Here is a U.S. Government page, “Cookies: Leaving a Trail on the Web.”

. . . and here is the page by the non-profit digital rights organization The Electronic Frontier Foundation titled 4 Simple Changes to Stop Online Tracking.