Category: Technology


Vinyl Records

Vinyl has become trendy again, and record pressing plants are pumping out as many new records as they can produce.  Some plants are even expanding.

Vinyl records have a mythology around them promulgated by audiophiles.  It is said that they are analog (they are), and that they thus reproduce the original audio more accurately than the digital “stair steps” (they don’t), and that music heard via vinyl is somehow “purer” than digital music.  Almost exactly the opposite is true.

I hate to break it to you, but vinyl is a terrible medium for reproducing audio, and its various deficiencies require countermeasures that significantly change the audio.  Tom Scholz, the leader/recording engineer for the rock group Boston, supposedly tried to get the first Boston album recalled when he heard what his mixes sounded like on vinyl.  Tom Scholz’s experience aside, many of the countermeasures make changes to the audio that audiences can find pleasing.

These countermeasures were implemented in the process of “mastering”.  Originally, mastering was just creating a master disk, from which the pressing plates for the vinyl records would be made.  The mastering setup was simply a cutting lathe that created the sound groove in a metal plate.

One of the physical properties of a vinyl record is that the width of the groove is determined by the volume of the bass frequencies.  When music started being recorded with electric bass, mastering engineers found they could often fit only five or ten minutes of audio per side of a long-playing record, instead of the normal 15-20 minutes, because the grooves were too wide.  So they added devices to the mastering setup to compress and limit the bass frequencies.  The same measures are required for classical music with lots of timpani and/or low brass, and for jazz with a prominent bass part.
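
To make the idea concrete, here is a minimal Python sketch of band-split compression: only the low band gets squashed, then the two bands are recombined.  The function name, crossover frequency, threshold, and ratio are all my own illustrative assumptions, not vinyl mastering presets, and the compressor is a simple static one rather than the dynamics processors a real mastering chain uses.

```python
import numpy as np
from scipy import signal

def limit_bass(audio, sample_rate, crossover_hz=150.0, threshold=0.3, ratio=4.0):
    """Compress only the low-frequency band of a mono signal.

    Splits the signal at `crossover_hz`, applies a simple static
    compressor to the low band, and recombines the two bands.
    All parameter values here are illustrative, not mastering presets.
    """
    # Split into low and high bands with 4th-order Butterworth filters.
    sos_low = signal.butter(4, crossover_hz, btype="low", fs=sample_rate, output="sos")
    sos_high = signal.butter(4, crossover_hz, btype="high", fs=sample_rate, output="sos")
    low = signal.sosfilt(sos_low, audio)
    high = signal.sosfilt(sos_high, audio)

    # Static compression: shrink the amount by which the low band
    # exceeds the threshold, preserving the sign of each sample.
    over = np.abs(low) > threshold
    low[over] = np.sign(low[over]) * (threshold + (np.abs(low[over]) - threshold) / ratio)

    return low + high

# Example: a loud 60 Hz tone mixed with a quiet 2 kHz tone.
fs = 44100
t = np.arange(fs) / fs
mix = 0.9 * np.sin(2 * np.pi * 60 * t) + 0.1 * np.sin(2 * np.pi * 2000 * t)
tamed = limit_bass(mix, fs)
print(f"peak before: {np.abs(mix).max():.2f}, after: {np.abs(tamed).max():.2f}")
```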

Another issue with vinyl is that it does not reproduce high frequencies well, and midrange frequencies tend to be prominent.  Mastering engineers added equalization to their mastering setups to partially compensate, and recording engineers would often boost high frequencies in their mixes to help them be audible on the record.  Even with these measures, high frequencies on records gradually disappear toward the top of our hearing range.
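
As a rough illustration of that kind of treble boost, here is a one-tap pre-emphasis filter in Python.  The coefficient is an arbitrary assumption I picked for the example, and a real cutting chain uses standardized equalization curves and proper shelving EQ rather than anything this crude; this is only a sketch of tilting the spectrum toward the highs.

```python
import numpy as np
from scipy import signal

def pre_emphasize(audio, coeff=0.7):
    """Tilt the spectrum so high frequencies are emphasized relative to lows.

    y[n] = x[n] - coeff * x[n-1] attenuates low frequencies and lifts the
    top of the spectrum; playback would apply the matching de-emphasis.
    The coefficient is an arbitrary illustration, not a standard curve.
    """
    return signal.lfilter([1.0, -coeff], [1.0], audio)

# Compare how a low tone and a high tone come through the filter.
fs = 44100
t = np.arange(fs) / fs
for freq in (200, 15000):
    tone = np.sin(2 * np.pi * freq * t)
    boosted = pre_emphasize(tone)
    print(f"{freq:>6} Hz  peak after pre-emphasis: {np.abs(boosted).max():.2f}")
```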

The dynamic range of vinyl (the range in loudness from the quiet background hiss of the record to the loudest sound it can produce) is much smaller than that of our ears.  On vinyl it is about 70-80 dB, while our ears have a range of about 120 dB.  Every 3 dB represents a doubling of sound power, so the extra range can be pretty important.  Music that goes from very quiet to very loud can exceed vinyl’s limits, so the quiet parts end up buried in the background hiss.  To deal with this issue, vinyl mastering engineers compress the entire mix (in addition to the extra compression and limiting on the bass frequencies), which reduces the dynamic range.  This technique is used on all types of music, but it is most important on classical recordings because they often have the widest dynamic ranges.
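
For a sense of scale, here is the arithmetic behind those numbers as a small Python snippet; the range figures are just the rough ones quoted above.

```python
import math

def db_to_power_ratio(db):
    """Convert a level difference in decibels into a ratio of sound power."""
    return 10 ** (db / 10)

# Rough figures from the text: ~70 dB for vinyl, ~120 dB for human hearing.
for label, span_db in (("vinyl", 70), ("human hearing", 120)):
    print(f"{label:>13}: {span_db} dB = power ratio of about {db_to_power_ratio(span_db):,.0f} to 1")

# 3 dB is a doubling of power because 10 * log10(2) is about 3.01.
print(f"check: 10 * log10(2) = {10 * math.log10(2):.2f} dB")

# The ~50 dB that hearing has over vinyl is another factor of:
print(f"extra headroom: about {db_to_power_ratio(120 - 70):,.0f}x in power")
```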

There are other, more arcane, measures taken in mastering, but many listeners find the ones I’ve described add a quality pleasing to the ear.  Overall compression makes it easier to hear all the parts, bass compression often makes the bass sound better, and the rolling off of high frequencies results in a sound many describe as “smooth” or “warm”.

At least part of the blame for the vinyl mythology lies with a shortcut the record companies took.  When Compact Discs first came out, the record companies believed they didn’t need to do any mastering for digital because digital didn’t have vinyl’s limitations.  They sent the master tapes to CD manufacturers with no mastering, and the CDs that were produced did not sound anywhere near as good as vinyl.  The discs had no compression (or only what the recording engineer had used), and because the high frequencies had been boosted for vinyl, they sounded “harsh” or “tinny”.

These problems were caused by a lack of mastering, not, as audiophiles believed, an inherent flaw in digital audio technology.  It took a few years for the record companies and engineers to figure out that, in order to sound good, a similar mastering process was required for digital media.  CDs manufactured in the early 1980s often have these sonic problems, while later “remastered” versions mostly sound better (to my ears) than the vinyl, or at least more similar to the original master tape.

Today, great tools exist for mastering digital recordings, and pretty much every digital recording, whatever the medium, gets mastered.  Mastering engineers have built on the vinyl techniques to create a large bag of tricks that make recordings sound better to listeners.  Over time, the ears of audiences have adjusted to hearing high frequencies without cringing, so they accept recordings where you can hear what the cymbals really sound like.  As a friend of mine who is a mastering engineer said to me yesterday, even an mp3, if it has a reasonable bit rate, will sound much closer to the original than vinyl will.

If you love the sound of vinyl, please enjoy it with my blessing.  Apart from the sonic aspects, I find the 15-20 minute album side a more satisfying chunk to listen to than a 3-minute mp3.  Just let go of the idea that you are hearing what the recording engineer heard when he was mixing.

Now that I’ve rained pretty hard on the vinyl parade, do I have an alternative?  Is there a different technology that I think will serve listeners even better?  Stay tuned for Giving Good Audio for Music Part II: 24-bit Audio.

Many have taken the position that pure Net Neutrality is essential for an open Internet.  Today the FCC announced that they will not be requiring a pure Net Neutrality solution, but what they will require is not clear.  And, to quote Ross Perot, the devil is in the details.

Traditionally, on the Internet there has been the concept of “peering”.  This means that if AOL and Hotmail were sending each other a fairly balanced amount of traffic, they wouldn’t owe each other any money.  But if a site was sending a lot more traffic into your site than you were sending to it, that site would owe you “peering fees”.
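
To make the settlement logic concrete, here is a toy Python sketch.  The function name, balance ratio, and per-terabyte fee are numbers I invented for illustration; real peering agreements are negotiated privately and vary widely.

```python
def settlement(traffic_out_tb, traffic_in_tb, balance_ratio=2.0, fee_per_tb=5.0):
    """Toy model of a peering arrangement.

    If the traffic in each direction is within `balance_ratio` of the
    other, the peers settle nothing.  Otherwise the heavier sender pays
    a fee on the excess.  The threshold and fee are invented numbers.
    """
    heavier, lighter = max(traffic_out_tb, traffic_in_tb), min(traffic_out_tb, traffic_in_tb)
    if lighter == 0 or heavier / lighter <= balance_ratio:
        return 0.0                      # settlement-free peering
    return (heavier - balance_ratio * lighter) * fee_per_tb

# Balanced traffic: no money changes hands.
print(settlement(100, 90))    # 0.0
# One side floods the other: the heavy sender owes a fee on the excess.
print(settlement(5000, 100))
```
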
Imagine this.  A small city builds a set of roads that is adequate for its normal traffic.  The normal traffic of its citizens travelling to other cities is balanced by citizens visiting from other cities.  At some point, another city starts sending a massive number of trucks into the small city, jamming the roads so the normal traffic can’t get through.  Traditionally on the Internet, the other city would help pay for the small city to widen and maintain its roads, since the other city is making money selling furniture (or whatever) to the citizens of the small city.

This system worked reasonably well when the “cities” were distinct in purpose; there were residential cities (access providers like AT&T and Comcast) and commercial cities (Netflix, Amazon, Google, etc.).  But now the residential cities want to be providers of stuff as well, and they want to use the peering fees, and the sanctions for not paying them, to disadvantage the commercial cities.  As a result, sites like Netflix want to stop paying peering fees.

Pure Net Neutrality advocates think we should require that access providers never give preferential access to any site, nor charge any site for the demands its traffic puts on their network.  That, in effect, means they must provide whatever level of bandwidth is required for any arbitrary application on the Internet.  This requirement seems overreaching to me.

When Netflix came online, the bandwidth used at many access providers increased to more than a thousand times what it had been.  Streaming movies involve many orders of magnitude more data than email or normal websites like Facebook and Google.  And that came after YouTube had already greatly increased the bandwidth people were using.  These increases required access providers to make massive upgrades to keep the streaming movies from slowing down all the other traffic, and/or to restrict how much bandwidth Netflix and YouTube were using.  And Netflix is not the last Internet application that will require an increase in bandwidth.  I suspect that an understanding of these factors has made the FCC uncomfortable with a pure Net Neutrality position.
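
To put rough numbers behind “orders of magnitude”, here is a back-of-the-envelope comparison; the sizes are assumptions I picked for illustration, not measurements.

```python
# Rough, assumed sizes -- not measurements.
email_kb = 50                      # a typical text email
web_page_mb = 2                    # a typical web page with images
movie_gb = 3                       # a two-hour standard-definition stream

movie_kb = movie_gb * 1024 * 1024
print(f"one movie ~= {movie_kb / email_kb:,.0f} emails")
print(f"one movie ~= {movie_kb / (web_page_mb * 1024):,.0f} web pages")
```
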
That said, we need to do something.  For example, I have AT&T U-verse as my Internet access provider.  AT&T wants me to buy movies from them rather than getting them from Netflix.  They should not be able to use the fact that I get my access from them to disadvantage Netflix or other sites, but they will if they get the chance, as any competitive company would.  Netflix should help pay for the extra bandwidth, but they shouldn’t be taken advantage of.  I’m not sure there’s a good way for the FCC to balance this.

It’s a thorny problem.  I don’t think a naïve pure Net Neutrality approach is the right solution, but we need something.  A decent solution might be to re-regulate the former phone companies and other access providers, banning them from providing commercial services but guaranteeing them a good rate of return.  I’m aware, however, that this will never happen.

In the 1970s, the first personal computers did not seem to be very important.  Arguably, it took far more time and energy to get them to do something than was ever saved by using them.  Nonetheless, tinkerers all over the place talked about how important they were going to be. 

By the mid-1980s, they actually started being useful, and by the 1990s, they had begun transforming our lives.  Secretarial pools, travel agents, newspaper classified ads, and letters sent through postal mail have largely become anachronisms, and the technology has changed almost every area of our lives.

But when we look at popular technologies that have come along since, they have been evolutionary, not revolutionary, despite what the ads say.  iPods, iPhones and iPads have just made some functions of personal computers available when you are not in front of a traditional computer.

But this week I got to play with a technology I believe may be truly transformative.  The device in question is called a 3D printer, though calling it a printer is a bit misleading.  It creates three-dimensional objects from a 3D model downloaded to it from a computer; the one I played with builds them out of ABS plastic.
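
To give a feel for what that downloaded 3D model actually is, here is a minimal Python sketch that writes a single triangle in the ASCII STL format that common 3D-printing toolchains read.  The function is my own bare-bones illustration, a real part would come out of a CAD program, and a lone flat triangle is not a printable solid, so treat this purely as a sketch of the file format.

```python
def write_ascii_stl(path, triangles, name="sketch"):
    """Write a list of triangles to an ASCII STL file.

    Each entry in `triangles` is (normal, (v1, v2, v3)), where the
    normal and every vertex are (x, y, z) triples in millimetres.
    This is a bare-bones writer for illustration, not a full exporter.
    """
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, verts in triangles:
            f.write(f"  facet normal {normal[0]} {normal[1]} {normal[2]}\n")
            f.write("    outer loop\n")
            for v in verts:
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One flat triangle, 20 mm on a side, lying in the XY plane.
triangle = [((0.0, 0.0, 1.0),
             ((0.0, 0.0, 0.0), (20.0, 0.0, 0.0), (0.0, 20.0, 0.0)))]
write_ascii_stl("triangle.stl", triangle)
```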

Professional 3D printers exist, but they are expensive, starting somewhere around $25-$30K.  But some NYC hackers created a kit, popularly known as a MakerBot, that lets you build one for less than $1000 (the actual name is the Cupcake; MakerBot Industries is the name of their company), and they’ve been selling out each production run months ahead of time for the last year.  That’s what I got to play with.  You can see an object I printed below:

[Image: MakerBot CS logo]

The triangle is the actual object, the logo of Crash Space, a hackerspace in Los Angeles.  The frame underneath is called a “raft” and is there to prevent curling as it cools.  The raft is peeled off and discarded.

The Cupcake is a machine that takes a fair amount of futzing to get it tuned in just right.  Once you do that, it runs well, but getting there can take a bit of work.  And the parts it creates are sometimes less polished than ones molded in a factory.  But if the technology evolves over the next decade in anything like the way personal computers did in the late 1970s and early 1980s, the world will be a different place.  Here’s an example:

Twelve-year-old Julie buys a new cell phone.  The company ships her the guts of it (a circuit board with a display and keyboard attached) and expects her to get the case for it separately.  She looks online and finds a design she likes.  She edits the design, adding her name and an embossed butterfly.  She pays a license fee for the design, and then either prints out the case on the family 3D printer or goes to Ginko’s Copy Shop and has them print it for her.  She prints it in bright pink plastic that matches her room.

Meanwhile, a fitting has broken on the dishwasher.  Her father downloads the part data from the appliance manufacturer’s website and prints out the part.  Julie’s birthday party is the following weekend, and her mother prints out personalized butterfly-shaped party favors (Julie likes butterflies).

Does this sound far-fetched?  Well, a MakerBot 3D printer was used about a month ago to create a replacement part for a dishwasher, even though the manufacturer did not have a 3D model or even the plans available online.  At least for MakerBot tinkerers, this vision of the future is already becoming a reality.

I’m currently designing brackets to mount my cell phone and iPod in the car, which I’m hoping to print out soon (yes, I do know how geeky this sounds).  Meanwhile, the MakerBot folks announced the successor to the Cupcake this week, called the Thing-O-Matic.  It can print slightly larger objects, is more accurate, and has a small conveyor belt that moves completed objects off the printing surface, so the next print can start without interruption.  If you are interested in learning more, you can read their press release or watch a MakerBot in action.  Get ready for the future, here it comes!

 

[Image: Thing-O-Matic]