
Vinyl Records

Vinyl has become trendy again, and record pressing plants are pumping out new records as fast as they can produce them.  Some plants are even expanding.

Vinyl records have a mythology around them promulgated by audiophiles.  It is said that they are analog (they are), and thus reproduce the original audio more accurately than the digital “stair steps” (they don’t), and that, somehow, music heard via vinyl is “purer” than digital music.  Almost exactly the opposite is true.

I hate to break it to you, but vinyl is a terrible medium for reproducing audio, and its various deficiencies require countermeasures that significantly change the audio.  Tom Scholz, the leader/recording engineer for the rock group Boston, supposedly tried to get the first Boston album recalled when he heard what his mixes sounded like on vinyl.  Tom Scholz’s experience aside, many of the countermeasures make changes to the audio that audiences can find pleasing.

These countermeasures were implemented in the process of “mastering”.  Originally, mastering was just creating a master disk, from which the pressing plates for the vinyl records would be made.  The mastering setup was simply a cutting lathe that created the sound groove in a metal plate.

One of the physical properties of a vinyl record is that the width of the groove is determined by the volume of bass frequencies.  When music started being recorded with electric bass, mastering engineers found they could often only get five or ten minutes of audio per side of a long-playing record, instead of the normal 15-20 minutes, because the grooves were too wide.  As a result, they added devices to the mastering setup to compress and limit the bass frequencies.  The same measures are required for classical music with lots of timpani and/or low brass, and for jazz with a prominent bass part.
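The compression and limiting those devices performed can be sketched in code.  Here is a minimal static compressor in Python; it is purely an illustration of the gain math (the threshold and ratio values are arbitrary choices of mine), not the analog hardware mastering engineers actually used, and a real bass limiter would also filter out the low frequencies first and smooth the gain changes over time:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Static compressor: once a sample's level crosses the threshold,
    its level rises at only 1/ratio of the original rate."""
    out = []
    for s in samples:
        level_db = 20 * math.log10(max(abs(s), 1e-9))
        if level_db > threshold_db:
            # dB by which the sample exceeds the threshold, scaled
            # down so only 1/ratio of the excess gets through
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
            s *= 10 ** (gain_db / 20)
        out.append(s)
    return out

print(compress([0.01, 1.0]))  # quiet sample passes untouched; the loud one is pulled down
```

With a 4:1 ratio, a sample 20 dB over the threshold comes out only 5 dB over it, which is exactly how compression keeps loud bass from cutting overly wide grooves.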

Another issue with vinyl is that it does not reproduce high frequencies well, and midrange frequencies tend to be prominent.  Mastering engineers added equalization to their mastering setups to partially compensate, and recording engineers would often boost high frequencies in their mixes to help them be audible on the record.  Even with these measures, high frequencies on records gradually disappear toward the top of our hearing range.
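To get a feel for what boosting high frequencies means, here is a first-order pre-emphasis filter in Python.  It is only an illustration of the principle (the coefficient 0.95 is an arbitrary choice of mine); the equalization curves actually used for records were more elaborate:

```python
def preemphasis(samples, a=0.95):
    """First-order pre-emphasis: y[n] = x[n] - a * x[n-1].
    Nearly cancels slowly-changing (low-frequency) content while
    passing rapidly-changing (high-frequency) content at almost
    double gain."""
    out, prev = [], 0.0
    for x in samples:
        out.append(x - a * prev)
        prev = x
    return out

# A constant (lowest-frequency) signal is cut to 0.05x its level...
print([round(y, 2) for y in preemphasis([1.0] * 4)])
# ...while an alternating (highest-frequency) signal comes out at 1.95x.
print([round(y, 2) for y in preemphasis([1.0, -1.0, 1.0, -1.0])])
```

The boost is relative: low frequencies are attenuated while highs are preserved, so scaling the whole signal back up to full level yields a net high-frequency emphasis.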

The dynamic range of vinyl–the range in loudness from the quiet background hiss of the record to the loudest sound it can produce–is much smaller than that of our ears.  On vinyl it is about 70-80 dB, while our ears have a range of about 120 dB.  Every 3 dB represents a doubling of sound power (a perceived doubling of loudness takes roughly 10 dB), so the extra range is substantial.  Music that goes from very quiet to very loud can exceed vinyl’s limits, leaving the quiet parts buried in the background hiss.  To deal with this issue, vinyl mastering engineers compress the entire mix (in addition to the extra compression and limiting on the bass frequencies), which reduces its dynamic range.  This technique is used on all types of music, but it is most important on classical recordings because they often have wider dynamic ranges.
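The decibel arithmetic behind those numbers is easy to check.  Here is a quick sketch in Python using the standard convention for amplitude ratios; the comparison to 16-bit CD audio is my addition, not a figure from the vinyl discussion above:

```python
import math

def db(amplitude_ratio):
    """Decibels for an amplitude ratio: 20 * log10(ratio)."""
    return 20 * math.log10(amplitude_ratio)

vinyl_range = 75                # midpoint of the 70-80 dB range quoted above
cd_range = db(2 ** 16)          # 16-bit audio has 2**16 distinct levels
print(round(cd_range))          # 96
print(round(cd_range - vinyl_range))  # roughly 21 dB more room than vinyl
```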

There are other, more arcane, measures taken in mastering, but many listeners find the ones I’ve described add a quality pleasing to the ear.  Overall compression makes it easier to hear all the parts, bass compression often makes the bass sound better, and the rolling off of high frequencies results in a sound many describe as “smooth” or “warm”.

At least part of the blame for the vinyl mythology has to do with a shortcut record companies took.  When Compact Discs first came out, the record companies believed that they didn’t need to do any mastering for digital because digital didn’t have vinyl’s limitations.  They sent the master tapes to CD manufacturers with no mastering, and the CDs that were produced did not sound anywhere near as good as the vinyl.  They had no mastering compression (only whatever the recording engineer had applied), and because the high frequencies had been boosted for vinyl, they sounded “harsh” or “tinny”.

These problems were caused by a lack of mastering, not, as audiophiles believed, an inherent flaw in digital audio technology.  It took a few years for the record companies and engineers to figure out that, in order to sound good, a similar mastering process was required for digital media.  CDs manufactured in the early 1980s often have these sonic problems, while later “remastered” versions mostly sound better (to my ears) than the vinyl, or at least more similar to the original master tape.

Today, great tools exist for mastering digital recordings, and pretty much every digital recording, whatever the medium, gets mastered.  Mastering engineers have built on the vinyl techniques to create a large bag of tricks that make recordings sound better to listeners.  Over time, the ears of audiences have adjusted to being able to hear high frequencies without cringing, so they accept recordings where you can hear what the cymbals really sound like.  As a friend of mine who is a mastering engineer said to me yesterday, even an mp3, if it has a reasonable bit rate, will sound much closer to the original than vinyl will.

If you love the sound of vinyl, please enjoy it with my blessing.  Apart from the sonic aspects, I find the 15-20 minute album side a more satisfying chunk to listen to than a 3-minute mp3.  Just let go of the idea you are hearing what the recording engineer heard when he was mixing.

Now that I’ve rained pretty hard on the vinyl parade, do I have an alternative?  Is there a different technology that I think will serve listeners even better?  Stay tuned for Giving Good Audio for Music Part II: 24-bit Audio.

Many have taken the position that pure Net Neutrality is essential for an open Internet.  Today the FCC announced that they will not be requiring a pure Net Neutrality solution, but what they will require is not clear.  And, to quote Ross Perot, the devil is in the details.

Traditionally, on the Internet there has been the concept of “peering”.  This means that if AOL and Hotmail were sending each other a fairly balanced amount of traffic, they wouldn’t owe each other any money.  But if a site was sending a lot more traffic into your site than you were sending to it, that site would owe you “peering fees”.

Imagine this.  A small city builds a set of roads that is adequate for its normal traffic.  The normal traffic of its citizens travelling to other cities is balanced by citizens visiting from other cities.  At some point, another city starts sending a massive number of trucks into the small city, jamming the roads so the normal traffic can’t get through.  Traditionally on the Internet, the other city would help pay for the small city to widen and maintain its roads, since the other city is making money selling furniture (or whatever) to the citizens of the small city.

This system worked reasonably well when the “cities” were distinct in purpose; there were residential cities (access providers like AT&T and Comcast) and commercial cities (Netflix, Amazon, Google, etc.).  But now the residential cities want to be providers of stuff as well, and they want to use the peering fees, and sanctions for not paying the peering fees, to disadvantage the commercial cities.  As a result, sites like Netflix want to stop paying peering fees.

Pure Net Neutrality advocates think we should require that access providers never give preferential access to any site, nor charge any other site for the demands its traffic puts on their network.  That, in effect, means they must provide whatever level of bandwidth is required for any arbitrary application on the Internet.  This requirement seems overreaching to me.

When Netflix came online, the bandwidth used at many access providers increased to more than a thousand times what it was before.  Streaming movies involve many orders of magnitude more data than email or normal websites like Facebook and Google.  And that came after YouTube had already greatly increased the bandwidth people were using.  These increases required access providers to do massive upgrades to prevent the streaming movies from slowing down all the other traffic, and/or to restrict how much bandwidth Netflix and YouTube were using.  And Netflix is not the last Internet application that will require an increase in bandwidth.  I suspect that an understanding of these factors has made the FCC uncomfortable with a pure Net Neutrality position.

That said, we need to do something.  For example, I have AT&T U-verse as my Internet access provider.  AT&T wants me to buy movies from them rather than getting them from Netflix.  They should not be able to use the fact that I get my access from them to disadvantage Netflix or other sites, but they will if they get the chance, as any competitive company would.  Netflix should help pay for the extra bandwidth, but they shouldn’t be taken advantage of.  I’m not sure there’s a good way for the FCC to balance this.

It’s a thorny problem.  I don’t think a naïve pure Net Neutrality approach is the right solution, but we need something.  A decent solution might be to re-regulate the former phone companies and other access providers, banning them from providing commercial services but guaranteeing them a good rate of return.  I’m aware, however, that will never happen.

Crowdsource Me!

Many of my creative friends have been looking at raising money for a project using a crowdsourced funding site like Kickstarter or IndieGoGo.  I’ve been reading the advice available to them on the Internet, and, having backed a number of projects on these sites myself, I have some information I think would help them that I haven’t seen anywhere else.

Kickstarter and IndieGoGo, if you aren’t familiar with them, sound almost too good to be true.  You just put your project up there and thousands–perhaps millions–of dollars roll in through the magic of the Internet!

The truth is that this does happen for the projects with the greatest appeal and best presentation, but many projects get little or no money.  On Kickstarter, as of this writing, just 44% of projects reach their funding goal.  Kickstarter requires that you meet your funding goal; if you don’t, it never collects any money from those who backed your project and you get nothing.  56% of projects–that’s more than half–end up this way.  IndieGoGo allows you to create a project where you get what is contributed (minus their service fee) even if you do not make your goal.  On IndieGoGo only 10% of projects meet their funding goal.  There are other crowdsourced funding sites, but these are the biggest ones.

I’ve looked at more than a hundred projects I considered backing at one level or another (I know a lot of creative people), and it’s clear to me that the creators of some of these projects don’t understand the thought process a backer goes through.  Below are some suggestions on how to make your project attractive to backers.

Of course, your project, whether it’s a CD or an electronic kit or a book, needs to be something that appeals to backers.  If it cannot capture people’s interest, no matter how professional your presentation, people will not back it.

  1. Do everything you can do without the money.  When I was involved in a startup company and talking to investors, this was the advice they gave me.  People who back projects think much the same way; they often expect you will get the project to a fairly complete stage.
    Before you put the project up on a funding site, you should, as much as possible, have all the difficult creative parts finished, leaving only the straightforward parts that can be done easily once you have the money to get them done.  Of course, for some kinds of projects (a film, for instance) this may not be possible.
    Most successful CD projects have the music written, recorded and mixed, and the artwork completed so all that is left to do is manufacturing.  If you are putting out a book, try to have it written/drawn, edited, etc. and ready for the printer.  If you are doing a consumer product, try to have a run of prototypes already made and beta tested, and the final version quoted and lined up at the assembly plant.
    I’ve noticed that projects asking for funds to finish the initial creative work often do not meet their funding goals, and when I’ve backed such projects, they more often fail to deliver.  That said, some have been successful.
  2. Be specific.  If you are creating a CD, tell backers the names and lengths of all the songs.  If you are creating a book, tell them how many pages, the page size, and how many pages are in color if you have pictures.  If you are making a consumer product, give the exact specifications.  Backers want to understand what they are backing, and nothing will turn off a backer faster than vagueness in the description of your project.
  3. Know what you plan to do, and stick to it.  I recommend that, before you create your project, you get a group of advisors.  These can be normal folks who you think might have an interest in backing your project and/or people who have done their own projects.  Show them your proposed project before it goes public, and have them ask questions and make suggestions.  Try to anticipate any concerns your backers will have.  Make any changes necessary before you go public.  While you may add to the project for stretch goals (funding goals beyond your initial goal), do not change the core project.
    Once you start changing things, it will be hard to say “No,” to other requests for changes, and many backers will defer backing until they feel the definition of the project is final, which will reduce your ability to hit your funding goal by the end date.
  4. Have the first batch of backers lined up.  Nothing says “failing” to potential backers like a project with just one or two backers.  I recommend you get 20 or more people lined up ahead of time who agree to back your project on the day it goes public.  This gives your project an initial momentum that will inspire other backers.  Encourage your 20 initial backers to share the fact that they are backing you on social media.
  5. Look at successful projects.  Get ideas from other projects that have worked well.  Here are a couple of my personal favorites.  Spock’s Beard offered all their older musical instruments as premiums that people could get for higher levels of backing.  Girl Genius had an inspired strategy for stretch goals.
    Spock’s Beard CD
    Girl Genius
  6. Read the rest of the advice.  There is lots of good advice out there about crowdsourced funding; I’ve just tried to cover some practical aspects that others are not covering.  Some good advice can be found at these links:
    Smart Blog
    Music Think Tank
    Young Entrepreneur Council

All-in-all, it is quite possible to create a successful crowdsourced funding project; it just takes a bit of thought and planning.  Good luck on your project!

Jobs, Jobs, Jobs!

No, I’m not talking about Steve.  This is an Op/Ed piece I wrote and submitted to the Los Angeles Times this week, when a guy much better known than I am will be giving a speech on the same topic.  I don’t begrudge the Times the fact that they elected not to run it.  Lots of other folks are writing about the issue as well, and their dance card for this week might have been full.  But I advocate a different approach from what you have probably been hearing.  Here is the piece in its entirety.


As we await President Obama’s Jobs speech and Speaker Boehner’s address on the same topic a few days later, many of us are afraid we will, again, just hear the same tired positions.

President Obama is expected to echo liberal economists, who believe the best way to stimulate the economy and job growth is to put money in the pockets of people at the low end of the economic scale. This approach, they argue, works because lower income folks spend it immediately, having a variety of unmet needs. These economists often suggest extending unemployment benefits and reducing payroll taxes as a way to do this. Critics object to giving more money to people who are not working or not paying much in income taxes, because it has the unintended consequence of paying people not to work hard.

Speaker Boehner is expected to echo conservative economists who believe that the best approach is to reduce taxes on business and people at the high end of the economic scale. Advocates of this approach argue that lower taxes increase investments and allow businesses to expand, hiring more people. Critics of this approach argue that often the money is not invested in creating jobs, or is invested in creating jobs overseas, and that it takes a long time to have an effect in any case.

While both of these approaches can positively affect the economy, neither has an immediate and lasting effect of creating jobs. But there is a third approach no one seems to be talking about.

Thanks to Federal Reserve policies, banks and large corporations have unprecedented access to capital in the form of loans, at or near a 0% interest rate. These entities have been reluctant to get ahead of the economy, and have, for the most part, left the cash in the bank or used it for mergers and acquisitions. Mergers and acquisitions usually result in fewer jobs as operations are consolidated, not in adding new ones.

Existing small businesses that want to expand and entrepreneurs who want to start new businesses have not had the same access to capital. As current conditions cause banks to remain cautious, small businesses actually have less access to capital today, not more.  The vast majority of employers (99.7%, according to the Small Business Administration) are small businesses. They employ over half of all private sector employees, and have generated 64% of net new jobs over the past 15 years. Unlike larger firms, who are responsible for being careful with stockholders’ money, these businesses will take a risk, expanding and hiring in advance of economic growth. In short, they are exactly what the economy needs in order to start a robust job recovery, but they have no access to investment or loans that would let them do that.

Over the past few years there have been some modest expansions of the Small Business Administration, but the best way to accelerate a sustainable jobs recovery is to significantly expand its programs. In response to the current jobs crisis, the SBA should guarantee more loans to small businesses, and it should start a program that works with banks to provide better access to business loans; many banks do not participate in SBA lending at all. It should also expand its programs to underwrite loans to buy businesses and business real estate.

The SBA should also expand its MicroLoan program, which provides loans less than $50,000 to start micro businesses, and consider increasing the MicroLoan maximum to $100,000. The SBA should consider reviving its “Participating Securities” Small Business Investment Company program (investing in venture capital funds), or at least further expand its “Debenture” SBIC program, which will increase the pool of venture capital. These simple measures will directly spur immediate and sustainable job growth.

Some of the loans will not be paid back, as has always been the case. But many of them will, which makes this proposal likely to have a lower impact on the federal budget than the expected plans from the president and the speaker, both of which will increase the deficit.

America has plenty of entrepreneurial spirit. There are four business incubators in the greater Los Angeles area and others in every major city in the country, filled with entrepreneurs eager to create the jobs of the future and employ the unemployed workers of today. Thousands of small businesses would employ more people if they could get a loan to expand. Tens of thousands of the unemployed want to start their own businesses. All we have to do is give them access to capital, and a larger Small Business Administration, at least until we are out of the woods, is our best vehicle to make that happen.

The First Shuttle Landing

STS-1 Landing at Edwards AFB

As we prepare for America’s very last shuttle mission, I thought I would share the story of my small role in the first shuttle space mission, STS-1.  (No, I didn’t get the word order wrong.  There were previous shuttle missions piggybacked on a modified 747 that did not go into space.)  Anyway, be warned.  This post is a little long.

In 1980, I arrived as an engineer at Edwards Air Force Base, working for Kentron International, the engineering services contractor for the base.  In college, I had wanted to study computer science, but at the time, almost no schools offered a degree in computer science.  I ended up studying electrical engineering with a computer science “area of specialization”.

In my interview for the job at Edwards, I talked about programming microprocessors, a skill I was sure they would be interested in.  The guy I interviewed with did not see things quite the same way.  It turned out they did almost no microprocessor work, doing most of their designs as large circuit boards covered with hundreds of logic chips.  The guy explained to me that microprocessors were a “fad”, which would quickly pass.  (!)  I got the job because I convinced him I could do circuit design as well.

About the time I finished my first microprocessor-based project for them (I never did any circuit design there), Edwards got the news that a large system they had ordered years before would be a year or two late in arriving.  This news caused more than a little panic, because the system was required for them to participate in the orbital portion of space shuttle missions.  Sure, the shuttle would still land at Edwards, but losing the orbital portion would not only be humiliating; they would also lose a substantial amount of funds, much of which they had already spent preparing for the shuttle landing.  And they were rightly very proud of the accuracy of their two RCA AN/FPS-16 radars (16-foot diameter dishes) made during the golden age of radar in the early 1960s.  They calibrated these radars by bouncing a signal off the surface of the moon.

Previous space missions had been done “unplugged”, at least the tracking part.  Each site that did tracking of the spacecraft would watch the horizon at the point the spacecraft was expected to appear, and when it was supposed to pop into view, the radar operators would madly search to find it.  When they found it, they would lock on with the radar, and tracking was automatic from that point.  The tracking data was recorded on a tape drive, and it was processed later.

This was to be the first space mission to use “continuous track”.  Live data would be sent over phone lines from a site tracking the shuttle to the next site it would pass over, and that site would slave their radars to the data to locate the shuttle before locking on.  That site would then send live data to the next site, and so on, providing continuous tracking data.  All the live data also went back to Houston so they could immediately see where the shuttle was.  The system that was going to be late did the slaving and data transmission that allowed them to do continuous track, as well as lots of other stuff.  They could do without the other stuff, but they needed the slaving and data transmission.

There was a very short time available to create a replacement.  In order to participate in the shuttle mission, a site had to succeed in a test of the continuous track system.  The test was to track a dead satellite continuously around the world, and the test was scheduled about 90 days after Edwards found out the system was going to be late.  My fellow engineers, who designed with logic chips, estimated they could do a replacement in nine months or so.

Being young and foolish, I spoke up and suggested a way we could do it in the time allotted.  There was an off-the-shelf computer system intended for industrial applications that could meet the requirements.  There were plug-in circuit boards from several vendors to do the different things we needed.  The other engineers were smart enough to understand that if someone failed at this task, he would almost certainly be fired because of the political weight of the issue.  I was naïve about the politics, and there was a consensus in our group that I should be the one to get the assignment.

I ordered the parts, put them together, wrote some assembly language code and started doing tests.  Everything worked, except sending and receiving data across the phone line.  We used an unusual mode of data transmission (synchronous, rather than the normal asynchronous), but I couldn’t get it to work no matter what I did.  As I continued to beat on it without success, everyone got more and more nervous.

After a couple of weeks of this, my boss hired a consultant to come in and help me.  They did want him to get it up and running, but they also wanted him to tell them whether they should fire me right away.

The consultant and I got along well, and he eventually identified the problem in a place I had not thought to look.  It turned out that the plug-in board  I bought to do the data transmission had a design flaw that made it work fine for asynchronous data, but not work for synchronous.  I cut a few traces on the circuit board with an X-acto knife and soldered on a few wires to correct the problem, and everything was running just as it should.  The consultant gave a very positive report on me, and later tried to hire me.

Edwards participated in the test with the dead satellite, with me at the radar all night as the test continued, “just in case”.  A couple of sites failed the test (not Edwards), and they did another test a week or two later, again with me standing by in the radar all night.

About ten days before the shuttle landed in April of 1981, my boss told me I needed to attend the base commander’s staff meeting.  Once again, being young and foolish, I thought maybe the commander would thank me for all the all-nighters I put in to get the project done.  For the entire staff meeting (an hour and a half) no one even glanced at me.  When the meeting was over, the commander peered at me and asked, “You Gloster?”  I nodded.  He said, “I need you to be at the radar for the whole shuttle mission, understand?”  I wasn’t quite sure what to say, so I nodded.  He said, “That is all”, picked up his notes and walked out of the conference room.

When I got back to the office, I asked my boss, “He can’t really do that can he?  I’m a civilian, not an airman, and not even a civil servant.”  My boss said, “Don’t make an issue out of it.  Just do it.  I’ll give you some comp time later.”

So, as America watched Cape Kennedy prepare for the launch, I was holed up in the tiny cinderblock building that housed one of the FPS-16 radars, where I remained for three days with my sleeping bag, as radar operators went in and out for their shifts.  Nothing went wrong.  If it had, I had a spare unit I had built, but I’m not sure I could have done anything other than swapping the unit out.

Two days later, when the shuttle landed, I was a little bleary.  Sleeping a couple of nights on a concrete floor behind humming racks of equipment while you are excited and a little worried doesn’t really give you quality rest.  But the radars are on a hill overlooking the lakebed, and watching the landing from there gave me the best view of anyone.  Getting the best seat in the house for that historic event made all the project’s late nights, political undercurrents and difficulties worth it.

Edwards AFB

I like Facebook.  It lets me stay in touch with people I like, with whom, because of distance or other barriers, I would normally lose contact.  It also lets me publish the occasional bon mot (which, being realistic for a minute, some of my friends probably block) or tell people about events in my life.

Of course, like any good thing, there are bad aspects also.  My personal peeve has been Facebook chain letters (you know, posts of the form “If you have any tiny vestige of patriotism/humanity you will put the following as your status for just 1 day/48 hours”), but recently I became aware of an even greater evil.

I’ve always been a bit suspicious of Facebook applications.  I blocked Farmville my first few days on Facebook.  I don’t have time for it, and I’m not sure why, but I find people giving me random Farmville objects strangely annoying.  Ditto for other Facebook games.  But two Facebook applications have recently tempted me.

A friend of mine uses NetworkedBlogs to send notifications of her blog posts to people on Facebook, and I play in a progressive rock band that wants to use Profile Pages for Musicians to promote the band.  Just out of curiosity, I clicked on the invite to Profile Pages for Musicians to see what I would allow if I accepted.  Below is what it showed me (with my email address removed).  They can:

  • Access my basic information
    Includes name, profile picture, gender, networks, user ID, list of friends, and any other information I’ve shared with everyone.
  • Send me email
    Band Profile: Profile Pages for Musicians may email me directly at <insert your email address here>
  • Post to my Wall
    Band Profile: Profile Pages for Musicians may post status messages, notes, photos, and videos to my Wall
  • Access posts in my News Feed
  • Access my data any time
    Band Profile: Profile Pages for Musicians may access my data when I’m not using the application
  • Manage my pages
    Band Profile: Profile Pages for Musicians may login as one of my Pages
  • Access my profile information
    Birthday and Hometown
I may be just showing my age, but I’m a little bit horrified that by accepting the invitation to the application, I am:
  • Providing tons of information about me, including stuff I only let my friends see
  • Providing a list of who my friends are
  • Giving them access to see all the posts by my friends (who may have privacy settings that are supposed to prevent this)
  • Giving them access to my news feeds, so they can see what my interests are and what stuff I “like”
  • Giving them my email address and allowing them to spam me
  • Letting them post stuff to my wall (which gets around my friends trying to block the application)
  • Letting them look up my information even if I am not using their application
  • And even letting them manage my Facebook pages!  

Note that I have my privacy settings moderately strict, so others may allow even greater access by accepting the invitation.  Facebook trusts those who provide the applications to act responsibly (in compliance with a vague policy), and has kicked out applications that do the most egregious violations (like posting blatant ads to people’s friends’ walls).  But Facebook does nothing to prevent the application from quietly gathering lots of data as long as it doesn’t do anything obvious that upsets users, and has no measures to enforce its rules other than kicking the application out after the fact.

I absolutely hate the idea that I am becoming an old Luddite curmudgeon, but, if I am honest, I will not be joining these or other Facebook applications, and you might think about adopting the same policy.

Zappa Perfected

Music Box Theatre

Friday night Kathy and I went to see Dweezil Zappa Plays Zappa at the Music Box Theatre on Hollywood Boulevard in Hollywood.  In case you haven’t heard about what Dweezil is doing, he’s put together a band of crack musicians (not musicians on crack), who perform the music of Frank Zappa, just as written.

You may have heard of Frank Zappa, and know a lyric line from one of his songs (probably “Titties and Beer” or “Valley Girl”), and dismissed him as merely a scribbler of profane lyrics.  He was much more than that.  For one thing, he was a great composer.  Don’t take my word for it; the classical composer and noted conductor Pierre Boulez says so, and he conducted some of Zappa’s classical works, which were composed near the end of Zappa’s life. 

Aside from his classical works, the music his various bands performed has been studied and admired by generations of musicians.  Zappa was famous for putting together some of the best musicians on the planet, and challenging them with compositions that pushed them to their limits.  His songs often contained musical jokes, where he took a recognizable riff from a popular artist and had fun with it.  Plus he was one heck of a satirist and storyteller.

Dweezil’s band focuses on the music Frank recorded with his various bands.  He had the challenge of putting together a band that could actually play Frank’s music, and of learning the ridiculously hard guitar parts himself.  He had the benefit of a catalog of music spanning three decades, with many die-hard fans and pent-up demand to hear the music performed again.  To put it in business terms, Dweezil had a strong “brand”, but when a new person tries to carry a brand forward, there is always a danger that the brand may be seen as cheapened.

Frank plays with the band during sound check

I had seen Dweezil and the Zappa Plays Zappa band twice before, and he has always done a creditable job with the music.  He always looked great (he bears a strong resemblance to David Krumholtz of Numb3rs fame).  His parts were well-executed, and the rest of the band was amazing.  Most of the same players have been playing with him for years.  The arrangements were flawlessly executed, and they achieved the technological feat of having Frank make a few appearances on a video screen during the show to play and sing along with the band.  I always left feeling like I got a very good and satisfying show, but I thought that Dweezil didn’t quite measure up to Frank in his solos.  That’s changed this time around.  Dweezil, while not the same person as his father, was certainly playing at the same level, and Friday’s show featured one of the better guitar virtuoso performances I’ve seen.

George Duke during sound check

Zappa Plays Zappa often has a musician who played with Frank as a special guest, and the guest plays a few songs with the band.  Friday, it was the amazing keyboard player George Duke.  While many Zappa fans might know him mainly for his work with people like Billy Cobham and Stanley Clarke, George Duke had even greater success as a solo recording artist, creating great R&B funk records.  He is also a successful record producer.

We got tickets that let us watch the sound check before the show.  It was a lot of fun to watch the band interact and figure out last-minute changes to the arrangements.  George Duke was also there, playing and getting his parts integrated.  At one point he was really wailing during a solo, and a couple of the band members pulled out their cameras and took pictures of him.  They were clearly fans.  Interestingly, George has all of his keyboards painted flat grey, so he does not advertise what kind of keyboard he is using.  He’s had that policy for more than 30 years.

Music Box Theatre wallpaper

The Music Box is an old Art Deco theatre.  Inside, the walls have 40-foot-high wallpaper displaying a famous image from Hieronymus Bosch’s painting “The Garden of Earthly Delights,” painted about 1490.  It bore a certain resemblance to some of the surreal Zappa album covers.

The theatre has no seats on the main floor (there are booths on the side, but they offer a poor view of the stage).  We opted to go up to the balcony, where we could get a seat and have a great view of the stage.  We managed to get the first row of balcony seating in the center section (the seating was not assigned).

The show was great from start to finish, and the balcony was the place to watch from.  After opening with Gumbo Variations, they played all the songs from the Apostrophe album in the same sequence as the album.  Released in 1974, it has always been my favorite Zappa album.  Frank got the opportunity to record with lots of top musicians on that record, and it was a creative high point for him.  Getting to hear George Duke play and sing live on Uncle Remus, which he co-wrote with Zappa, was a real treat.  Cosmik Debris had Frank on video doing the vocals.

After that, they gave us another eight songs, including RDNZL, Pygmy Twylyte, Inca Roads and City of Tiny Lites.  George Duke played with them on a number of these.  The encore included Baby Snakes, Chrissy Puked Twice (AKA Titties and Beer), and Muffin Man, with Frank (on video) playing guitar on the final tune.

Leaving the show it occurred to me that Dweezil has achieved something his father never quite managed.  Frank’s bands, several of which I got to see, had great musicians, but they were always experimenting to one degree or another.  Dweezil has managed to take his band and the music into a more consistent and polished state, which is great for audiences.  You really owe it to yourself to catch this great band at least once.

Set list for the show I saw
Tour Dates

Zappa Plays Zappa On Stage

Jon Anderson Solo

This past Wednesday, Kathy and I got to see Jon Anderson, normally the vocalist for Yes, perform a solo show.

Jon was scheduled to be on the last two Yes tours, but due to two severe asthma attacks and acute respiratory failure, he was unable to be on either tour.  (Instead they brought along a singer who has performed with a popular Yes tribute band.)  Doing better now, Jon recently did a tour of the UK with Rick Wakeman, and is now doing a solo tour of the U.S. and Canada.

Orpheum Theatre

The show was at the Orpheum Theatre, on Broadway in downtown Los Angeles.  It is a magnificent old Art Deco building, and is often used for small tours by progressive rock folks.  We recently saw a Keith Emerson and Greg Lake (but no drummer) tour there.

Jon has a positive energy, despite his health challenges, that is palpable when he is on stage.  I have no doubt that the name Yes was his idea.

Jon got to play his versions of many Yes songs, as well as a few non-Yes songs.  He mainly played acoustic guitar, but he also played a bit of dulcimer and piano to accompany his singing.

When he sang the Yes material, the vocals were the same glorious vocals we hear on the albums, but the chords he played were COMPLETELY DIFFERENT!  At one point, he explained that he was playing the songs “as I originally wrote them”. 

This led me to imagine the Yes recording process starting with Jon recording his acoustic guitar and vocals, and the band then replacing his guitar with completely different music.  This was a bit of a revelation to me, as when I have played Yes music, the relationship between the music and the vocal part is not always obvious.  It makes sense that they were not necessarily written by the same person.

Jon Anderson on stage

Jon was relaxed and quite entertaining.  His vocals sounded great, and he told some fun stories.  One that I remember was about Yes doing a worldwide tour after Owner of a Lonely Heart became a big hit.  They played in Brazil for a huge crowd, and their next performance was in Argentina.  But just a few months before, Britain had been at war with Argentina over the Falkland Islands.  It turned out that Yes were the first British band to go to Argentina after the Falklands business, and there were death threats saying that someone was going to get shot.  At that point, Chris Squire (bass player in Yes) told Jon, “Well, you’re out front, so I guess you’re the one who’ll get shot.”  Jon reported that they played the gig and no one got shot, but that he moved around a LOT.

The part I think Kathy enjoyed most was when Jon sang a song he and Vangelis Papathanassiou (yes, that Vangelis) wrote together called State of Independence.  Chrissie Hynde also did a version of the song that Kathy is partial to.

All in all, it was a fun evening and a good and revelatory show.

Netduino vs Arduino


The Netduino and Arduino are inexpensive (about $30-$35) small single-board computers that have allowed lots of regular people to create devices containing an embedded computer.  If you’ve never heard of them, you probably don’t care about the rest of this article.

I recently got one of the new Netduinos, and have been playing with it.  I’d previously done half a dozen Arduino projects, so I was interested in the differences.  I have to say, I was very impressed with it, but there are differences you should know about before you jump into using a Netduino.

Before We Even Start

The slugline for the Netduino is that it is like an Arduino, only using C# and .NET for programming.  That’s accurate, but there’s more to it.  Current Arduinos are based on the ATmega328, with its cousin the ATmega8U2 handling USB on the newest boards.  These are fairly simple 8-bit processors running at 16MHz.  The Netduino has a 32-bit Atmel ARM7 processor running at 48MHz, similar to the processors found in many cell phones and embedded devices.  It has a much larger program space (128K, not including the .NET runtime, vs. 32K for everything on the Arduino), and much larger RAM (60K vs. 2K).  The Netduino itself (schematic, layout, and code) is entirely open-source.

First Look

The Netduino board is the same size and shape as an Arduino board.  It has the same shield sockets labeled the same way, the same power connector, and a USB connector.  The USB connector is the mini size many cell phones use, rather than the full-size one on the Arduino Duemilanove and Uno.  (This is an improvement, as shields rest dangerously on top of the metal USB connector.)  The USB connector is in the center of that end of the board rather than on the left edge.  Like the Arduino, the Netduino has a reset button in the same spot; its power LED (bright white) and the LED on digital output 13 (bright blue) are in different board locations than on the Arduino.  There is a place at the back of the board to install a 6-pin header, though no header is installed, in the same spot Arduinos have a similar header.  The TX and RX monitor LEDs that Arduinos have do not exist on the Netduino.

Development Environment

First, as you might expect, the development environment runs only under Windows; it requires Vista or Windows 7.  Like the Arduino, you can set up a complete development environment for free.  Unlike the Arduino, it is not all open source, and in order to be legitimate, you will need to register for one component.  There are three components you need to install (one is open-source), but once you do that, it works well.  You don’t even need to install a device driver (that is done as part of the other installs).  You will be working in the Visual Studio 2010 environment, which is pretty bug-free and easy to use once you get used to it.

Something I don’t hear mentioned is that this setup provides a far superior debugging environment.  You can do both emulation and in-circuit debugging, unlike the Arduino environment, which currently doesn’t do either.  When I told Visual Studio to debug my program, it downloaded my code onto the board and started running it.  I was greatly surprised that when I clicked next to a line of code to set a breakpoint, the code running on the board immediately stopped at the breakpoint, and I could single-step through it, then set other breakpoints and proceed.

I have been programming the Netduino in C#, which is similar to Java in many ways, but you may be able to use Visual Basic as well.  Once I got used to doing embedded development in it, I liked C# better than the Arduino language.  The Arduino language is a simplified version of C, but almost anyone who uses it ends up needing regular C constructs (like sizeof()), so you get code that is a mix of Arduino and C.  C#, like Java, has many constructs that make the code more elegant and easier to read than C.  And the .NET Micro Framework library is more extensive for some functions than the Arduino standard library.  Also, C# delegates are a much cleaner way of setting up handlers for events, which is likely a lot of what your code will be doing.

A Drop-In Replacement?

The Netduino is not a drop-in replacement.  If you will only be doing digital I/O at low current, you can probably get away with using it that way, but there are a variety of differences you need to be aware of.  Some of these differences may make it a better fit, and some of them may make it a worse fit.  In any case, you don’t want to plug a Danger Shield (for example) into it and turn it on (its analog voltages are too high for the Netduino).

Chip power: Internally, the CPU runs at 3.3V, not 5V like the Arduino, though it uses the same power sources.
Digital I/Os: Go from 0V to 3.3V, not 5V.  It will work with most 5V logic circuits, input and output.
Analog inputs: Must not go higher than 3.3V!
PWM outputs: PWM is often used like an analog output.  Since 100% duty cycle averages to 3.3V instead of 5V, circuits may work differently.
Libraries: None of the Arduino libraries, which are C and C++ code, will work on the Netduino without modification.  If you use a board-specific library, you may have to rewrite it.
USB connector: Uses a cell-phone-style mini USB connector.
I/O current: The pins on the CPU can drive a maximum of 8mA, which is less than the Arduino.
CPU: 32-bit Atmel ARM instead of 8-bit ATmega.
Speed: 48MHz instead of 16MHz.
Program memory: 128K instead of 32K.
RAM: 60K instead of 2K.
EEPROM: The Netduino has none.
In-circuit debugging: The Netduino has it.
Emulation: The Netduino development environment has it.
Price: As of this writing, the Arduino Uno has a street price of about $30, while the Netduino goes for about $35.

Beyond Netduino

Something interesting you will find if you look at the schematic of the Netduino is that a lot of processor pins aren’t connected to anything!  The processor has a lot more I/O capability than it can connect through the standard Arduino footprint.  For that reason, the Netduino guys are working on the Netduino Plus.  It still has the Arduino footprint, but the Netduino Plus board adds an Ethernet connector and a micro SD card slot.  (It suddenly becomes clear why they moved the USB connector.)  As of this writing, the Netduino Plus is in beta, and not generally available.

If that is not enough for you, there are currently 21 separate development boards you can buy that are based on the .NET micro framework.  Most are available from Mouser. 
.NET Micro Framework Hardware


If you want to write a more serious program that is larger, requires a faster processor and you want a better debugging environment, the Netduino has a lot to recommend it, and a variety of options if you outgrow it.  If you want maximum compatibility with existing Arduino shields and libraries, the Netduino may not be your best option.

Getting Started

Here are some links to get you started on Netduino:

Netduino Site
Netduino Getting Started PDF
Atmel Microcontroller Data
Atmel Microcontroller Full Datasheet
Netduino Schematic
Netduino Forums

Development Software
Microsoft Visual C# Express 2010
.NET Micro Framework SDK v4.1
Netduino SDK v4.1 (32-bit)
Netduino SDK v4.1 (64-bit)
.NET Micro Framework Reference


Addendum

One piece of information that I missed including in my original post is that the USB port works a bit differently on the Netduino than it does on the Arduino.

The Arduino lets you treat the USB port as a simple serial port, and it is very easy to write code that communicates across it. The Arduino has sorted out how to differentiate between the communication to download a new program and normal communication, and for most applications, it just works the way you want it to.

The Netduino works differently. The much more complicated communications that allow you to run debug commands and break into a running program do not allow simple serial use. You can recompile the firmware package without the debug monitor, which should allow it (I have not tried this), but that is more trouble than working with the Arduino for applications where this is important. That said, working with an in-circuit debugger is pretty useful once your program grows beyond a few lines of code.

The Netduino Plus has become available in the interim (about $60), and the addition of an Ethernet port and a microSD card slot on the same size board make it appropriate for a broader range of applications. You can get free shipping if you buy it from Secret Labs through the Amazon storefront.

Addendum #2

It’s been great to see all the response to this article.  Here are some additional book resources you might be interested in.

Expert .NET Micro Framework  A couple of years old (2009), with nothing specifically about the Netduino, but a very thorough exploration of software development and the framework on similar devices.

Embedded Programming with the Microsoft .NET Micro Framework  Even older (2007), this is Microsoft’s official book on the subject.

Getting Started With Netduino  This book is not quite out yet as I write this.  It is Make Magazine’s book on the Netduino.  It looks to be less deeply technical than the other books, more hobbyist-friendly, and is geared specifically at the Netduino with examples you can do right away.


If you have a significant geek factor, you may have more than one computer in a room at home.  Sometimes you have your old computer plus your new computer, or your home computer plus your laptop from work, or a large stack of machines tracing your computer history over the last decade.

If you find yourself in this situation, you might find a use for a device I have never seen in any computer store or swap meet.  Fortunately, with very minimal soldering skill, you can build it in an evening very cheaply.

The problem this solves is what to do with the audio from both (or all) of those computers.  With this computer audio mixer, you can use one set of powered speakers and have the audio from all of your machines come through them. 

Note: this only works with powered speakers; a passive mixer like this cannot drive unpowered speakers.

For my setup, I decided to have four inputs, but you can use the same approach for however many inputs you need.  Here’s the schematic:

Here’s what the circuit board looks like assembled:


 You can use either 1/4 watt or 1/8 watt resistors.  Here’s what the board looks like from the other side, with the locations of resistors shown:


 Here it is built into a box:

 I used some parts I had around the house, but you can build it from the following parts from Radio Shack:

Proto Board, part 276-158, quantity 1
10K Resistors, part 271-1335, quantity 2
1/8 inch Stereo Jack, part 274-246, quantity 5
Box, part 270-1805, quantity 1
1/8 inch Stereo Cable, part 42-2387, quantity 4

Just use the stereo cables to connect the speaker outputs of your computers to the inputs of the box.  Then plug the powered speakers into the output.