Capital-M Music

Studio Guide

by Richard Bruner

I’ve put together a guide with some things to think about when building out your personal or professional music studio. This is a major task for a composer, producer, or recording artist / engineer, and it’s something you’ll work on for pretty much the rest of your life as a musician. This guide has several components: a description of the types of music studios you might want to build; some thoughts on how to get started; a description of the different types of equipment and software you will eventually need, along with what I currently use in each category; and finally some extra mini-guides at the end that cover the more common synthesis techniques and give a quick crash course in MIDI. Both of those are very big topics that people have written thick books about, so I can only cover the basics here, but I’ll recommend some books if you want to follow up for more information.

This guide goes along with two other guides I’ve put together - one for my overall musical philosophy, “Capital-M Music”, and a second “Life Tips” guide that relates some of my experience in the wide world of Music, in areas ranging from being a gigging performer, to composition, to general musicianship. I will also cross-reference those other documents below where they’re relevant to what I’m discussing here.

Studio Categories

Starting a Studio

Equipment Descriptions: Hardware

Equipment Descriptions: Software

Equipment Descriptions: Extras

Equipment Descriptions: iPad (both for sheet music and in the studio)

Bonus Guide: Synthesis Techniques

Bonus Guide: MIDI

First, let me give a quick breakdown of common studio types. Many studios combine some or all of these elements and are what you might call project studios, usually owned by a single person and used primarily for their projects, either personal or commercial (for clients). Many of these are at the person’s house, so they can also be called home studios, but either way, it’s still useful to think about types of studios.

Studio Categories

Composition Studio: A studio whose primary output will be sheet music, either for a recording session or for live performance in a concert music or “band” environment. The primary tool will be notation software, and all you really need is a laptop or even an iPad and notation software. It would also be useful to have some type of MIDI keyboard, and for your main desktop setup I’d recommend at least 61 keys. Audio output will be secondary, just to get a basic mockup or to check for errors in the notes and rhythms you write. This setup will be used by concert composers, arrangers, and by music copyists, whether for concert music or media projects.

Electronic Production Studio: A studio whose primary output will be an audio file. Often entirely electronic, this is a studio for various forms of electronic dance music, but also for making higher-end mockups for film projects (some composers hire people to do that for them), and with a simple audio interface you can record some live elements as well for pop songs and other similar styles. The primary tool will be the DAW (digital audio workstation - a computer program for recording audio and MIDI), and you will need some kind of MIDI input device, probably a keyboard. Again, at least 61 keys for your main setup, and 25 or 37 for a portable setup. Another popular MIDI device for this kind of studio is some sort of beat controller, like an AKAI MPC or Native Instruments Maschine, particularly for electronic dance or hip-hop beat production. You will also have more need for an extensive computer-based or external synthesizer collection, and this is probably where the bulk of your ongoing spending will be. The playback environment is also more critical here, as you will want to be able to mix your projects effectively, so studio monitor speakers will be an element you’ll need to consider, along with ideally getting your mixing environment acoustically treated. Most likely to be used by a composer who calls themselves a “producer”, or by a mock-up artist.

Film Scoring Studio (or today perhaps more accurately “media composition studio”, as most commercial composers work in several types of media including film and tv but also video game scores and commercials (tv or internet advertising)): Combines both of the above, and also adds the ability to score to picture. Most of the main DAW programs and even most of the main notation programs offer extensive support for scoring to picture now, so it’s more a matter of learning how to do that than it is a matter of being able to do that. Familiarity with stylistic tropes and a sense of how dramatic music works (meaning music in drama, not necessarily big and bombastic though it can be!) will be more critical here, and that’s a whole other area of training outside the scope of my Capital-M Music project. Most likely to be used by a media composer, of course! Orchestrators might also like a set up like this, and really even outside of film scoring this is going to be the most versatile general home project studio.

Recording Studio: This one is the most likely to be outside a person’s home. Electronic studios and film scoring studios (and more advanced composition studios) will all have some ability to record, but the time may come when you need to record more than you can manage at home, or at a higher quality, so you’ll want to go to a professional recording studio. Or you may want to run one. Here, you’ll need to set up extensive live and mixing rooms (two separate rooms), and your mic locker (what microphones you have) will need to be quite extensive. You’ll also want either a mixing board or a large format audio interface (or several) with lots of inputs - at least 24 if not 48 or more - and a very nice studio monitor speaker setup. The software tool will almost certainly be Avid Pro Tools for this setup, and you’ll need to get very good at using it if you want to run a recording studio. Most likely to be hired by any of the other types, or run by an “audio engineer”, or sometimes a “recording engineer” or “mix engineer”.

Thoughts on Starting to Build a Studio

This is an ongoing process you will probably be engaging in for the rest of your life in music once you start building your studio. It can be daunting to get into because of how much equipment it seems like you need and how much a lot of it costs. Also, there are websites and trade publications that feature really high-end professional studios, and comparing what you have to them, it can feel like you’ll never have enough. But you don’t need that much equipment to start with, and most of those people didn’t build their studios overnight. You start with a modest studio and then add on and upgrade your gear over time after that. Keep in mind that once you are making money with your studio you can theoretically justify spending more money on it as a business investment. As a student or as an amateur it is harder to justify thousands of dollars all at once without some way to make a return on that.

My story:

I got my first real studio setup at the end of 8th grade going into high school (this was in 2004), and it took the form that I have labeled as a “composition studio” above, namely a laptop computer and Sibelius 3 notation software. I used a kid’s keyboard we had gotten a few years prior until I was able to get a 61-key MIDI controller keyboard a couple of years later, the M-Audio Axiom 61. I got a couple of programs for sequencing over that period, and we started adding elements bit by bit - a sample library here, a microphone and a small mixing board so I could record myself - and by the end of high school I had a modest but usable personal project studio. When I went to Berklee, they required us to get a MacBook Pro and a software suite that we bought through the school, and I kept building my studio up slowly. At this point, I have been building my studio for about twenty years, and I’d say I now have a reasonably decent midlevel professional film scoring studio, which can also be used for many other types of music composition and production. There are still plenty of aspects of it that I’d like to upgrade at some point, but it works for me at the moment.

Getting Started Today:

If you are starting out today, here’s what I’d recommend. You probably already have a computer, and you can get started with whatever you are using now. If it’s a Mac, then it has GarageBand built in, which is an adequate DAW to start with. On PC, there are other free DAWs which I haven’t tried, or you can get lite versions of Cubase, among other programs. If you want to try notation, my favorite free program is MuseScore, or look into getting a lite version of Dorico or Sibelius, which will be cheaper than the full “pro” or “ultimate” version. It will be limited, but you might be able to start that way and upgrade when you are ready to do so. If you are a student, take advantage of student discounts for software while you are still in school.

There are various free or relatively inexpensive synthesizers and sample libraries to start with as well. Eventually you will want to upgrade to commercial-grade synthesizers and libraries, but you don’t have to buy them all at once. This is an area where you will keep buying pretty much forever. I have a setup that works for me now, but I still get new synths and libraries from time to time when they seem intriguing, when I have some extra money, or when a particular need comes up in a project I’m working on. I’ll note that it took several years for me to get to a setup I was particularly happy with. One thing to consider is what comes built in to your DAW software. Logic Pro has the best overall suite of included synths and effects, though if you are writing for orchestra, I’d still recommend something else over the included orchestral sounds. But for electronic music, Logic is excellent as a starting point. One of my favorite synths in Logic is Alchemy, which holds its own even against the third-party commercial synthesizers. It used to be one of those third-party synths until Apple bought it, and the version they released in Logic was a significant upgrade over the version that existed before. Logic is also relatively inexpensive compared to its professional-level competition. With most other DAWs, I’d recommend getting third-party synthesizers and libraries sooner, some of which I mention below.

Hardware-wise, once you have a computer I’d start with a MIDI keyboard of some sort, and then, depending on your studio goals, you’ll want an audio interface and ideally real studio speakers as soon as you can get them. You can start with computer speakers (external - built-in speakers will never be adequate for this work, even as nice as the speakers on my MacBook Pro are now), but they won’t be as useful for mixing as studio monitors. Headphones are another option as a stopgap, but there are other problems with mixing on standard studio headphones, and you are better off with speakers as soon as practical. You will want headphones anyway for recording yourself or others if nothing else, but mixing on them should be a last resort.

If your goals are primarily to write for acoustic instruments, then you really may not need much more than I mentioned above in the “Composition Studio” section. A laptop or desktop computer (or an iPad), notation software, maybe Wallander Instruments’ NotePerformer for playback rather than the built-in sounds for those programs (note that it only works with Finale, Sibelius or Dorico, and only on the computer, not the iPad), and a 61-key MIDI keyboard will get the job done pretty well for composing for acoustic instruments. You don’t even really need the keyboard, but I find it helpful to have one in front of or next to me while I am writing, just as a guide to figure certain things out, and it can be handy as an input device in notation software as well. As I mention in Life Tips Composition Tip No. 7 and in the Multi-Instrumentalist section of my main paper, keyboard skills are essential for composers, and this is one reason why I say that.

I would also think about writing on paper (or handwriting on an iPad) with your acoustic instruments or even no instruments - either singing, or just hearing music in your head and trying to write it down - at least as an exercise now and then, to build or maintain that ability so that you are not completely dependent on your computer to compose. I have a long section about that in the middle of the Musicianship portion of my main paper, but as a Musician it is good to be able to compose on your own terms. Writing on the computer can lead to writing in a rut and not thinking about things that are difficult to enter into the software, which is really aimed at pretty mainstream writing. I do most of my composing by improvising on my instruments and then working it into the computer directly, but it doesn’t hurt to try it by hand now and then for variety. Some people even prefer to work that way. If you are into extended techniques or other so-called “New Music” elements, writing by hand at first will be almost necessary, as the software doesn’t tend to work well with those kinds of things. You can do them in notation software, but it’s best to enter them into the computer after you’ve composed them rather than trying to compose and fight the software simultaneously.

Even if you do make music that works well in notation software, you’ll want to learn the software thoroughly, as it is very frustrating to compose and struggle with the software at the same time. Having to break out of your composition mindset to look up a function of the software can make you forget what you were trying to write in the first place, so the less you have to do that the better off you’ll be. Obviously while you are first learning the software there will be a lot of that, but the sooner you can get past that stage the easier it will be to go back to composing!

One other tip for composing here - have some kind of recording device with you all the time. If you have a smartphone, the built-in Voice Memos app (or whatever it is called on your phone) works perfectly for this. You can be struck with ideas at any time, and if you can just whip out your phone and hum them into it, you can salvage a lot of ideas that would otherwise be lost to the ether. I can’t tell you how many times I would be in a practice room at Berklee and get some cool idea, and then have to run down the hall to my dorm room where my laptop was so I could try to put it down. Many potentially interesting ideas were lost in the two minutes it took to run down the hall, turn on the computer, and realize I’d forgotten my idea. If I’d had my handheld recorder, or today my smartphone, I could have just recorded a scratch recording on the spot in the practice room and then transcribed it into the computer when I got back. I also like having the Dorico app on my iPad now. There have been a couple of times that I woke up with a new tune running through my head from a dream, and I was able to roll over, grab the iPad and write it down within seconds. I named them “Dreaming in Tune No. 1 and No. 2”!

On to the product categories guide:

Hardware

Computers: Use whatever kind of computer you are comfortable with - there are endless flame wars online about whether Mac or PC is better (or even Linux), and really it comes down to your preference. I’ve used both Mac and PC: I used PCs growing up, used Mac for about 10 years between Berklee and the first few years after that, switched back to PC in 2018, and most recently back to Mac in 2023. I think the current Mac computers are the best available for music production, but boy do they make you pay for it! You’ll want a computer with at least 32 GB of RAM, a 1 TB SSD (or larger), and a decent CPU. Probably anything in the Apple M series will be fine, but I’d recommend at least an M(x) Pro chip. My MacBook Pro has the M2 Max, which is probably overkill, but it’s a great computer! For PC, you want at least an i7-level Intel chip for the CPU.

Either way, get at least a backup hard drive for your system drive, and another external SSD (or a second internal SSD if you have a desktop that supports that) for your sample libraries. Capacity-wise, the sample drive should probably be at least 2 TB today if not more, depending on how many sample libraries you think you’ll get. The backup drive (which can be either a solid-state drive or a hard-disk drive) should be at least double the capacity of your primary drive for primary drives up to 1 TB. If you have a 2 TB primary drive, you can probably get away with a 2 TB backup, because it will take you a long time to fill up that much space - but at minimum, your backup drive should match the capacity of your primary drive.

Keyboard (MIDI): My favorite brands of MIDI keyboards are M-Audio and Arturia. Most of my keyboards have been M-Audio, starting with the M-Audio Axiom 61 back when I was first building my studio. I used that through my time at Berklee, then switched to the Novation Impulse 61 when I first came out to LA. I used that for a few years, then went back to M-Audio with the Code 61, and I just replaced that after about seven years, in Fall 2024, with the Arturia Keylab Essentials 88 mk3, to switch up to an 88-key configuration for my primary studio controller. I chose this one for very specific reasons I will address below.

For my portable keyboards, I’ve mostly used Arturia keyboards. I currently have a 25-key Minilab and a Keystep 37. I’d use the Keystep all the time but it doesn’t fit in my backpack, so I travel with it in my suitcase on trips where I have that, and take the Minilab when I just have the backpack. I’ve been using it for tutoring sessions at CSUN where I want my student to use the acoustic piano in the room but I still want to have a keyboard, or when I’m tutoring electronic music production in some form or other.

88-key keyboards will depend on what you want to do with them. Digital pianos (for home or gig performance) may be different from studio controller use. I’ve loved Roland keyboards for a while - I think they have some of the best relatively compact hammer actions in the PHA-4 and PHA-50. My home performance keyboard setup (which is what I practice and record piano on) is quite complex now, but the heart of it is a Roland FP-90 digital piano, which has the excellent PHA-50 action in it. I couple it with Modartt Pianoteq running on a Mac Mini for the primary sound source and it’s amazing! I’ve often found other weighted keybeds to be too heavy for my taste (particularly from Yamaha), though they can work for some people. Kawai is another high-end brand for digital pianos.

For a primary studio controller, I’d go with at least 61 keys at home, which translates into 5 octaves of keys. (The formula: subtract 1 from the total keys, then divide by 12 - 12 notes per octave, plus one note to finish the top of the highest octave. So 25 keys is 2 octaves, 37 is 3, 49 is 4, 61 is 5; then they jump to piano style, so 88 is a full-size piano keyboard, a little over 7 octaves, and 76 is one octave less than that.)

I would also recommend getting one that has a modwheel, a pitchbend wheel (ideally separate from the modwheel), and sliders and knobs. Some also have finger drum / trigger pads, and channel aftertouch, which lets you press into the keys after the initial strike to send continuous pressure data. All will have a sustain pedal input; many will have an expression pedal input (a continuous pedal that rocks back and forth to send data, used in organs as a volume pedal or on a guitar pedal board as a “wah” pedal, among other things). If you can get one that has a continuous sustain pedal input, that’s helpful for playing acoustic piano effectively, but that’s more common on digital pianos than on controller keyboards. This is obviously too many controls to manipulate at once, but you can record things in multiple passes, so you’d have one pass for notes, one for modwheel expression (which is how many virtual instruments control volume expression), and then any other parameters you need to record. Also remember that when inputting many kinds of lines you only need one hand (for flute parts, for example, if you are not using a wind controller for those), so you’d have the other hand free for something else on that note pass. It will take some coordination, and some practice time, to get good at this - it’s an instrument like any other!
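If you like, the keys-to-octaves formula can even be sketched in a few lines of Python - purely an illustration of the arithmetic, not something you need in a studio:

```python
def octaves(keys: int) -> float:
    # Subtract 1 (the extra key finishes the top of the highest
    # octave), then divide by 12 semitones per octave.
    return (keys - 1) / 12

# The common controller sizes:
for keys in (25, 37, 49, 61, 76, 88):
    print(f"{keys} keys -> {octaves(keys):g} octaves")
```

Running it confirms the jumps: 61 keys comes out to exactly 5 octaves, while 88 keys is 7.25.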

One other aspect I will mention here - I had 61-key semi-weighted keyboards for the first 12 years or so of my “real” studio setup, and until very recently as I write this, I still used that kind of keyboard as my master controller. There are three types of key actions (with variations within them across brands and models): unweighted (also called synth action or organ action); semi-weighted; and fully-weighted or hammer action. Most of the time, 88-key keyboards will be hammer action, 76 could be any, and 61 or smaller will usually be either semi-weighted or unweighted. This refers to the resistance you feel as you push the key down, and I was surprised at the difference it made when I got a hammer action keyboard for my “performance” setup. I got the Pianoteq software several years ago, and being a physical modeling synthesizer, it always felt much more acoustic to play than most sample libraries I tried. But pairing it with a hammer action keyboard made a gigantic difference. The biggest difference I noticed was that the physical range of force (or speed, “velocity”) it could measure was much greater than on my semi-weighted keyboards. One thing I didn’t like about those was how little force you had to use to hit 127 (the maximum value - see my MIDI guide below) on the velocity response, which makes it very hard to have subtle control over keyboard dynamics. One piece I’ve learned recently is Debussy’s Clair de Lune, which is so subtle that it’s effectively impossible on a semi-weighted keyboard. But both of the hammer action keyboards I’ve had offer a much greater physical range between 1 and 127, so you can get much more delicate pianissimos and really play a thunderous fortissimo when you get going (much more like an acoustic piano), and that is the single factor that made me start getting back into playing and learning piano again about six years ago as I write this in 2024, even more than the key range. I went from trying piano every now and then to practicing for an hour or more per day just from that one change, and my current home performance keyboard setup is very fun to play.
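Incidentally, if you are stuck with a keyboard that slams into 127 too easily, many DAWs let you insert a velocity curve on the MIDI input to tame it. Here’s a rough Python sketch of what such a remapping does - a hypothetical power curve of my own, not any particular product’s implementation:

```python
def soften_velocity(v: int, gamma: float = 2.0) -> int:
    """Remap a MIDI velocity (1-127) through a power curve.

    gamma > 1 pulls mid-range strikes down, so light playing no longer
    hits maximum - a software stand-in for the wider physical dynamic
    range of a hammer-action keybed.
    """
    if not 1 <= v <= 127:
        raise ValueError("MIDI velocity must be 1-127")
    # Normalize to 0-1, apply the curve, scale back, and clamp to >= 1.
    return max(1, round((v / 127) ** gamma * 127))

# A gentle touch stays gentle; only a real strike reaches full force:
print(soften_velocity(40), soften_velocity(90), soften_velocity(127))
```

It’s not a substitute for a better keybed - the sensor still only measures what it measures - but it can make semi-weighted keys feel less hair-trigger.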

That being said, I still prefer semi-weighted keyboards for my studio controllers because I feel they work better for non-piano sounds most of the time (like string patches, synth patches, etc). Your mileage may vary, but that’s my preferred setup and it’s what I use today. This is why I chose the Arturia Keylab Essentials 88 for my most recent upgrade - it’s an 88-key that is semi-weighted, so I can use it more easily on all my sounds. I have the Roland for my performance setup, so I don’t need the master controller to be hammer-action. Another semi-weighted 88-key is the M-Audio Keystation 88, but it only has a modwheel / pitchbend, no sliders, knobs or pads like the Arturia does.


Wind Controller: I currently use an Akai EWI-Solo wind controller. I had an Akai EWI-USB for about 13 years before it began failing recently (a pretty good lifespan for a relatively cheap electronic device you use by putting it in your mouth!). The EWI-Solo is great, and has the advantage of built-in sounds and a speaker, compared to the EWI-USB, which could only work with a computer. That said, I did find that if I wanted to use the built-in sounds in public I’d need an external amplifier, as the built-in speaker is just too quiet to work with most other instruments. It’s fine for practicing at home, but I’m mostly going to use it with my computer setup anyway, to play in wind parts for my sequenced mockups (or to add wind-like expression to electronic sounds).

A wind controller will usually look somewhat clarinet-like, and will translate both breath pressure into MIDI breath control (MIDI CC 2) and fingerings into MIDI note values, based on metal touch contacts you cover with your fingers like the holes or keys on a wind instrument. You can also get breath controllers, which are headsets with a mouthpiece that translates breath pressure but no instrument body; instead you use one alongside your keyboard, which handles the note input. These are less common than wind controllers these days, but they can also work well for some people. Both of these devices are useful for playing wind parts on a mockup because they give you some of the natural expression that players of acoustic wind instruments would use, which is harder to mimic with a modwheel, slider or foot pedal on a keyboard.
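For the curious, the breath data itself is just a stream of three-byte MIDI Control Change messages on controller number 2. Here’s a small Python sketch of how a device might encode its pressure sensor reading - a hypothetical helper of mine, assuming the standard MIDI 1.0 message format:

```python
def breath_cc(pressure: float, channel: int = 0) -> bytes:
    """Encode a breath sensor reading (0.0-1.0) as a MIDI CC 2 message.

    Status byte 0xB0 + channel means Control Change; the two data
    bytes that follow are the controller number (2 = breath control)
    and a 7-bit value (0-127) scaled from the pressure reading.
    """
    if not 0.0 <= pressure <= 1.0:
        raise ValueError("pressure must be between 0.0 and 1.0")
    return bytes([0xB0 | (channel & 0x0F), 2, round(pressure * 127)])

# Half breath pressure on MIDI channel 1 (channel index 0):
msg = breath_cc(0.5)
print([hex(b) for b in msg])  # status byte, controller 2, value
```

A real wind controller sends dozens of these per second as your breath changes, which is exactly why the expression feels so much more natural than drawing a modwheel curve by hand.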

I originally used the wind controller with Wallander Instruments’ WIVI synthesizer, but when I last tried that it crashed on my new Mac, and I don’t think it’s being developed anymore, so I’ve now switched to using AudioModeling SWAM woodwinds and brass for my orchestral sequencing template.

Audio Interface: I’ve been using Focusrite Scarlett interfaces for the past several years. I used to really want MOTU interfaces, and I still like them in principle but I haven’t paid that much attention to them recently. I also like the Arturia interfaces, and Apogee has several that are nice (or at least they did, but again I haven’t been paying that much attention to this space in the past few years).

An audio interface is a device for getting audio into and out of your computer. It’s a box that plugs into your computer, usually with USB these days, and then has audio out ports for studio monitor speakers and one or two headphone jacks, and then usually at least two and sometimes four or even eight inputs for microphones (XLR) and electric instruments (1/4”), usually combined into one port for both of those. Some of them will have additional ports, like 5-pin DIN MIDI connections, or digital audio in/out in various forms. The main thing you are choosing is how many inputs you want, and then what kind of mic preamp they provide - more expensive will usually be better, of course, but even fairly inexpensive interfaces have pretty good preamps these days. Let’s put it this way - if you are at the stage where you are concerned about the price of your audio interface, the quality of its mic preamps isn’t likely to be the weak link in the quality of your resulting audio!

Studio Monitor Speakers: This is an important category - this is how you will hear the music you are working on, so it pays to get it right. This is not the same as a set of hi-fi speakers or an audiophile listening environment. The primary quality of studio monitors should be transparency: you hear what the sound actually is, without additional enhancement from the speaker. You want a roughly flat frequency response between approximately 20 Hz and 20 kHz (the idealized human hearing range).

There are a variety of different types of speakers. These days, most home studios will work with powered near-field monitors: they have a built-in amplifier and connect directly to your audio interface, and they sit fairly close to your listening position, which reduces the impact that room acoustics have on your sound (though if you can get your room acoustically treated, it will still help).

I’m currently using a pair of Adam Audio T7V speakers for my setup. There’s a lot more to say for this category, which you can find elsewhere online if you are looking to buy speakers, but I’ll leave that there for now.

Microphones: This is an area that you will probably never “finish” collecting. Depending on your musical goals you may or may not need much in the way of microphones. If you are primarily interested in electronic music then you don’t necessarily need microphones, but it might still be useful to have one or two for times when you do want to add in acoustic instruments or vocals. There are several types of microphones in wide use - condenser (large and small diaphragm), dynamic, ribbon, and contact mics are all common in a studio environment, and they each have strengths and weaknesses.

If you only want one mic, or when you are just starting out, get a large diaphragm condenser microphone. These are the most versatile and have the cleanest and fullest sound. Once you have a collection, they’re common for vocals and strings in particular, but they work on most instruments. High-end large diaphragm condensers include the Neumann U87 and AKG C414, but you can get decent mics for around $100 from several companies that will be a reasonable starting point.

Dynamic microphones are common for high amplitude (high volume) instruments like electric guitar amps, drums, and other similar instruments. They are also common for live sound applications for stage use, because they can be handled with less noise, so if you see a singer holding a mic, it’s probably dynamic. The Shure SM57 and SM58 are the quintessential dynamic microphones.

Ribbon mics are only going to be found in studio use because they are very fragile, but they have a warm sound for vocals and other similar uses.

Contact mics are a special type of mic that you attach directly to a vibrating surface - an instrument’s body, a wall, or another large object - picking up vibrations through the surface rather than the air. They are usually used in conjunction with other mic types to add to the overall sound.

I have several mics at this point, mostly large diaphragm condensers, but also one small diaphragm condenser and a couple of dynamic mics. I mostly record myself and have never recorded more than one person at a time in my studio, so that’s been enough for me so far.

Another related area you might want to look into is field recorders - these are small devices with built-in microphones that can record to internal media, so they are useful when you are recording “out in the field” (making a recording of a live concert or even just recording sounds out in the wild for sampling purposes or something similar), as there is no need for a computer on-site. Just make sure you have enough batteries! I have a Zoom H2 recorder I use when I need a high quality recording, or if I just need a “scratch recording” I can even use my phone. The Voice Memos app on most smartphones is great for that sort of thing, but as good as the mics are now on smartphones, they are not as good as on dedicated field recorders like the H2.

Hardware Training (Audio Theory):

This would be a good place to mention that the book I used to learn the fundamentals of audio production and studio setup is Berklee Press’s “Understanding Audio” by Daniel M. Thompson. This was the book Berklee recommended as a reference for the music technology placement test when I went there, and I had already read it cover to cover at least once by that point and nearly aced that test. Note that I used the first edition; the current version is the second edition. Most of the first edition is even still relevant today - the fundamentals generally don’t change rapidly!

Software

DAW (Primary): One of the most important decisions you’ll make for your studio is which DAW (Digital Audio Workstation) to get. This is the program you will be using to generate high-end audio output for your MIDI-sequenced or audio-recorded projects. I’d recommend getting a couple, as they can be good at different things. I’m currently mostly using Apple Logic Pro, which I started using at Berklee with Logic Studio / Logic Pro 8, and now I’m on Logic Pro X (which has been the version number for about a decade at this point). [Update: Shortly after I wrote that, Apple released Logic Pro 11, so that is now the current version. Trust them to leave it alone for a decade right until I write that in my guide 🙂 ]. Logic is only available for Macs, so when I switched back to PC I also switched to Cubase Pro, which is also great. I might use that on my Mac once I upgrade to a newer version than I have right now and no longer need the USB license key.

Honorable mention goes to MOTU Digital Performer, which I used as a film scoring student at Berklee. At the time, it had some of the most powerful tempo-mapping features for film score cues, and I made extensive use of it for those classes. I never really liked its graphical interface as much as Logic’s, though, and found it kind of fiddly to work in, so when I came out to LA, I switched back to Logic as my main DAW. If you are primarily working with recording studio sessions (as opposed to music production or composition / film scoring projects), then you should seriously consider Avid Pro Tools, which is the industry standard for that world.

Most professional film composers also have Pro Tools so they can interface directly with the studios where they record their projects. They often write / mock up in other software, but then set up Pro Tools files with any pre-recorded tracks that would help the live musicians to hear, plus a custom click track, and give those files to the recording studio for the session. Afterward they can take the file back to their own studio to mix, or send it to another studio, as it’s relatively rare to mix in the same studio where they record when hiring a professional recording studio.


DAW (Secondary): Once you have your primary DAW, you might want a secondary one for electronic production - and if that’s mainly what you do, it might even be your primary DAW. Ableton Live is the go-to for that world, but I personally prefer Bitwig Studio for electronic production. Logic Pro has a lot of good features for that side of things too, but Bitwig and Ableton are more focused on it. Reason (from Reason Studios) is another one to look at, and it comes with many good synthesizers built in (as does Logic). Bitwig comes with a whole synthesizer-building environment in the form of The Grid. I don’t know that much about Ableton, never really having used it myself - I tried it a few times over the years but could never really get into it.


Notation Software: The other primary software program you will be using in your studio for composition / film scoring projects is your notation program. If you do projects that need notation, I’d really recommend getting a notation program and not trying to use the built-in notation editor in your DAW.

My current program of choice is Dorico Pro from Steinberg. I got in on this a couple of months after v1 launched, when it was still very rough around the edges, but as of v5 I’d say it’s a fairly mature program, and it really feels like the only one of the “Big 3” still innovating these days. The Big 3 are Finale, Sibelius and Dorico.

I used Sibelius from 2004 until I got Dorico (and really until I switched to Dorico full-time in 2021, when they got rid of the USB license key requirement - I hate those things!). I still like Sibelius, and I still maintain my license for it, but I haven’t used it for myself in a few years. I have tutored it, though, for some of my students at CSUN who are still using it.

I tried Finale back when I was picking my first program in 2004, and found it hard to think musically in, for the way my brain works, but I do know people who seem to like it OK.

[Update Fall 2024: After I first wrote this section, Finale set off an earthquake in the professional music world by announcing they were shutting down development of the program. As of Oct. 2024 you can no longer buy it, and as of Oct. 2025 (pending any change of plans by then) you will no longer be able to activate it on a new installation. They endorsed Dorico as the way forward, with a massive discount (half off the standard cross-grade price, which was already around half off the full price). A lot of people who were still on Finale, including several friends of mine in Los Angeles, have had to scramble to figure out what to do, and it caused such a stir that even the mainstream press picked it up (mainstream here meaning non-music-industry press). Seriously, there’s even a Slate article about it, though I’d challenge their apparent claim that anyone who’s anyone in music was still using Finale - some were, but many had already moved on. It certainly is true, though, that Finale shutting down is the end of a major era of music technology, and a big deal historically, even if it didn’t totally shock me, as I hadn’t heard much about Finale development for several years anyway.

So at this point, we’re left with the two programs I’ve used myself, Sibelius and Dorico, and I’ve been fielding questions and tutoring for Dorico from several people I know over the past few months. I’m still using Dorico myself as I mentioned above. /endUpdate]

If you want to start with a free program to get into this world, I’d recommend MuseScore, which seems like the best free software at the moment. I’ve tried it, and it’s not what I’d use today but it’s a lot cheaper and will get the job done for a while. I don’t have a problem with it, I just have access to better, more powerful tools now. Several students at CSUN are using it, though our professor is trying to get them to switch to a professional program since in theory if you are majoring in composition you want to be a professional someday, and learning the tools in school is better than waiting until later (it doesn’t get easier to learn new programs with time!). For what it’s worth, he’s also endorsing Dorico, and is trying to switch to it himself from Sibelius.

Sample Libraries / Synthesizers / Virtual Instruments / Notation Playback:

This is far too big a category for me to do more than offer suggestions from what I use.

To define terms quickly:

Sample Library: A collection of audio recordings programmed to be played from a keyboard or other MIDI source - frequently of acoustic instruments, though you can find sample libraries of almost any kind of sound nowadays. They are played from a “sampler” program, the most common of which right now is Kontakt from Native Instruments (included in any Komplete package).
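If you’re curious what a sampler actually does under the hood, the core trick is just changing playback rate: shifting a recording by n semitones means reading it back at a rate of 2^(n/12). Here’s a tiny Python sketch of that idea - the function names are my own toy illustration, not how Kontakt or any real sampler is implemented (real samplers use far better interpolation and filtering):

```python
import math

def playback_rate(semitones_from_root):
    """Rate multiplier for a pitch shift: +12 semitones doubles the rate."""
    return 2 ** (semitones_from_root / 12)

def repitch(samples, semitones):
    """Naive resampler: step through the source at a fractional rate,
    using simple linear interpolation between neighboring samples."""
    rate = playback_rate(semitones)
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out
```

Note the side effect: playing a sample an octave up (+12) doubles the rate and therefore halves the duration, which is one reason samplers record each instrument at many pitches rather than stretching one recording across the whole keyboard.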

Synthesizer: In this context we’re mostly talking software synthesizers, so this is a computer program that generates sound using any of a variety of methods (generally not including sampling, or only using it as one of several techniques, otherwise it’s a sampler, not a synthesizer). Common methods include subtractive synthesis, additive synthesis, FM synthesis, wavetable synthesis, (sampling), physical modeling synthesis, and granular synthesis. Many modern synthesizers use several of these methods at once for a wider variety of sounds. See below for a guide to these synthesis techniques.
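To make one of those techniques concrete, here’s a minimal two-operator FM sketch in Python - the function and parameter names are mine for illustration, and real FM synths add envelopes, many operators, and feedback on top of this basic idea:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def fm_tone(carrier_hz, ratio, mod_index, seconds):
    """Basic two-operator FM: a modulator sine wiggles the carrier's phase.
    ratio sets the modulator frequency relative to the carrier;
    mod_index controls how bright / clangorous the tone gets."""
    out = []
    mod_hz = carrier_hz * ratio
    for n in range(int(SAMPLE_RATE * seconds)):
        t = n / SAMPLE_RATE
        modulator = math.sin(2 * math.pi * mod_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator))
    return out
```

With mod_index at 0 you get a plain sine wave; raising it adds sidebands and brightens the tone, which is why classic FM patches often sweep the modulation index with an envelope.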

Virtual Instrument: Can be either a sample library or a synthesizer (most likely using physical modeling or additive synthesis) that generates sound like a specific kind of acoustic instrument. I generally only count really expressive instruments here, not sample libraries built on canned recordings of things like crescendos - you should be able to play all your expression in your own way, like a “real” instrument, rather than relying on pre-baked performances in the sample library. But there are some sample libraries that can do this, such as the instruments from Samplemodeling.

Notation Playback: What it says - the sounds a notation program uses to play back your score. They all come with their own sounds, or you can use a third-party tool.

For notation playback in the “Big 3”, I like Wallander’s NotePerformer best. It’s good enough that for many projects I write in notation software I don’t need anything else, but it’s not perfect. I haven’t found anything better overall though, and it really is quite good.

What else you get will depend entirely on what you want to do with your studio and music, but for a general all-around studio, I’d recommend the Native Instruments Komplete package - ideally Komplete Ultimate, which is pricey, but if you can afford it I’d go with that, as it’s very comprehensive right from the start. (I worked up to it from lower versions and upgraded over time to Komplete Ultimate.) I’d also recommend Arturia’s V-Collection for general electronic synthesizers.

I’m a huge fan of physical modeling synthesis, especially for re-creating acoustic instruments. Many composers use sample libraries for that, and they can be fine, but the good ones are very expensive and the cheaper ones are often a little limiting. I started with the Garritan Personal Orchestra sample library, and today there are several decent free general orchestral libraries. At CSUN, I helped install Project SAM’s Free Orchestra 1 + 2 and a few other sample libraries in the Music Tech lab for the current semester (Spring 2024), though I haven’t used them much myself, since I have commercial-grade products now.

But I’ve tried to switch all my acoustic emulations to modeled synths as much as possible, so at the moment I have:

  • Woodwinds and Brass: Audio Modeling SWAM collection, with an EWI Solo wind controller.

  • Strings: Synful Orchestra (I want to get the Audio Modeling strings some day), or I just export audio from NotePerformer into my DAW. I also record live violin and viola myself over the synths when I have time, usually 4 tracks each for first violin, second violin and viola.

  • Percussion: This one varies - I have modeled drumset from IK Multimedia’s MODO Drum; samples from several companies, including Native Instruments Komplete and Garritan; and epic percussion from Komplete, SoundIron, and 8dio. I also use Heavyocity Evolve from Komplete for percussive loops, orchestral tam-tam and bass drum, and sometimes for transition effects.

  • Piano, Keys, Pitched percussion: Modartt Pianoteq, Modartt Organteq, and Arturia V-Collection

  • Bass (guitar and upright): IK Multimedia MODO Bass

I also really like the Applied Acoustics Systems Modeling Collection, and in particular Chromaphone, which is a lot of fun for making electronic sounds with an organic feel.

My favorite general electronic synthesizers today are Xfer Records’ Serum, and Arturia’s Pigments.

That should be enough to get you started!

Software Training:

You can find a lot on YouTube, but two of my favorite resources for professionally produced software training are LinkedIn Learning (formerly Lynda.com) and MacProVideo.com (which you can use even if you are on a PC). I actually wrote a paper for one of my classes at CSUN (Teaching Music in Higher Education) about how to learn software, where I mentioned some resources for specific things.

One quick general tip here: Whenever you are learning software (especially your primary software - DAW, notation software, etc.), always try to learn to do as many things as possible with keyboard shortcuts. It will make your use of the software much faster and more fluent, and it has the benefit of being impressive to anyone who happens to watch you use it! There are some standard system-level shortcuts that everyone should know, and these work in pretty much every program made in the last several decades. To do these, hold down the Command key (cmd) on Mac or the Control key on PC, and then press:

cmd+z = undo (the one I probably use the most!) [note: the convention I’m using here means hold down the command key, then press the z key while still holding it. Same for all the other shortcuts below]

cmd+y = redo (not as common as cmd+z, but many programs allow you to “undo your undo” with cmd+y)

cmd+a = select all (selects all text or objects on a page)

cmd+c = copy (copies selected items to the clipboard, so that you can…)

cmd+v = paste (contents of clipboard where the cursor is)

cmd+x = cut (copies selected items to the clipboard for pasting elsewhere and deletes them from where they are now)

cmd+s = save

cmd+n = new file (usually, or new something at least)

Most other shortcuts are more specific to the software you use, but many will be extremely helpful.

Not quite software training, but my all-time favorite book for learning about electronic musical synthesis is Martin Russ’s “Sound Synthesis and Sampling”. It’s quite comprehensive while still being pretty easy to read. Most books on the subject are either at the level of “if you turn the cutoff knob, the filter cutoff will change”, or “here’s some C++ code to implement an FM synthesizer, and here’s the math to prove how it works”. Russ’s book is a good middle ground - not too technical, but well beyond the absolute basics. It also talks about how music studios work, and has a section on performance control of synthesizers, since synthesizers allow you to separate the control from the sound production and achieve some interesting results.

Extras

Most of the things above are recommended for most project studios in some form or other, but there are some other elements that can be nice to have but aren’t really necessary.

Button Controller (Macro Controller):

This is an interesting category of device - if you don’t have one you may not see why you’d need one, but once you do, they make a lot of things much easier. Basically, they are keyboard shortcut button machines. You set up a macro (a combination of one or more keyboard shortcuts performed in a row), and then assign it to a physical or touch-screen button, which you can lay out and label in a way that makes sense to you. I think the most popular one today (at least the one I’ve seen the most, and the one I have) is the Stream Deck by Elgato. They make several versions. I have a “regular” Stream Deck that I used in a few different setups but that eventually became primarily the control panel for my performance keyboard setup, which makes Pianoteq feel more like the “built-in sound” of my keyboard rig. From my custom Stream Deck configuration I can control preset selection, change octaves in Pianoteq, and control certain additional parameters, which also lets me record those changes in the MIDI file generated when I play my improv sessions. I also got a Stream Deck XL, which has more physical buttons (32 vs. 15 for the regular), and I use it with my studio setup to control Dorico, Bitwig or Logic more effectively. These days you can find preset Stream Deck configurations for most professional software (even Zoom). I like Notation Express for Dorico and Sibelius, and I’ve used SideshowFX configurations for Bitwig and Logic. You can customize these professional configurations as well.

You can also get the Stream Deck Mini with 6 buttons, though that seems too limited to me, as well as the Stream Deck + with a few buttons and 4 assignable knobs, and they have a smartphone app to turn your phone into another Stream Deck (for a few dollars per month as a subscription).

Control Surface:

There was a period when I really wanted a hardware control surface for my studio, to get hands-on control over my computer plugins and mix settings. This was actually my original rationale for getting an iPad, but once I got one I discovered the way I had set up my templates didn’t lend itself to using a control surface, and I’ve never really gotten into that world after all. These can be handy devices anyway, and are worth looking into if you can afford them. I don’t know as much about the current state of the market for control surfaces, but the one I really wanted was a modular system called the “Artist” series from Euphonix (later acquired by Avid), consisting of the MC Mix (which you could get up to 4 of, for 32 faders / channel strips), the MC Control (with 4 more faders for a total of 36, and a screen with soft knobs for controlling plug-in parameters), and the MC Transport (which had a giant jog wheel plus transport controls - play/pause, stop, etc.).

Touch Screen:

This is mostly for PC users right now, and is actually the main reason I switched to PC for my last upgrade cycle. I wish Apple would allow touch screen support for Macs. I tried this on my PC setup and it was sometimes nice to be able to reach out to control certain things from the screen. It wasn’t enough of a draw to keep me on PC for my most recent upgrade cycle, as I wanted the raw power of the new M-series chips more than touchscreens, but if you have a PC then touch screens can come in handy now and then. Not a necessity by any means, but try it out if you have a chance. If you are on Mac and want to try it (and have an iPad), then you can try things like the app Duet, which can turn your iPad into a second screen touchscreen for your Mac. Apple has a built-in screen share function for the iPad with Mac, but it doesn’t support touch screen abilities on the iPad.

Dual Screens:

This one I would almost call a necessity for your primary desktop setup - I have a hard time going back to single screen setups now for serious production work, but you can do everything with just one screen. When I’m recording at home, it’s nice to have my music on one screen and my recording software on the other simultaneously, or even for office work, the ability to see your own documents on one screen while seeing Zoom or even someone else’s screen sharing on the other is very nice. Sometimes I have a spreadsheet on one screen and my web browser on the other. Even just having an app on one screen and Finder windows (or File Explorer windows if you are on PC) on the other is convenient. Many production programs feature ways to use two or more screens for that program - sequencers often allow you to show the Arrange page on one screen and the mixer on the other, for example, though I have gotten used to using single screens per program for those sorts of things most of the time. I might sometimes open a plug-in window on my second screen while keeping the main window on the first, but otherwise I usually use different programs with my dual screen setup. Once you have two screens you’ll find all sorts of ways to use them, but if you only have one you may not miss having two.

iPad for Music

This one is not strictly a studio thing, but there are several ways to use iPads for music. iPads have historically been much more suitable for music than other tablets. I haven’t looked at the Android music ecosystem lately but I haven’t heard anything to suggest that the situation has improved dramatically in recent years. That being said, one of my favorite sheet music apps is available on Android as well (see MobileSheets below in the Software section), and it also works on Windows, so it covers all the bases. iPads have generally been more useful for production / synth purposes though.

These days, the primary way I use an iPad for music is as a sheet music reader / digital music stand. I’ve switched to using it rather than paper sheet music in most of the orchestras I play in, and that has led to several questions from people regarding its utility or asking for recommendations, so I’m going to put some answers to those questions here, and then I’ll mention some additional studio uses after this part.

iPad for Sheet Music

Why would you want to use an iPad for sheet music in the first place? I’ve found a few reasons why I like it. First is just the coolness factor - it’s still new enough to seem “cutting edge”, and a lot of people think it’s neat and ask about it (I’ve definitely sold at least 2 or 3 iPads for Apple in the last 3 years, and maybe some others - they should pay me a commission!). On a more practical level, there are three other reasons I like it:

  1. Database - it’s a much easier system to keep track of the music you have, and to search through and sort the music you have so you can find it easily compared to paper music. It also helps keep track of what all I’ve played over the years with the setlist / playlist features in most sheet music apps. I’m a tech nerd as you can probably tell from this studio guide, so I just like tracking stats about my life in general and my musical life in particular. I also have a spreadsheet I started shortly after coming out to LA where I’m tracking all my orchestra concerts from the end of 8th grade forward - up to 130 distinct concerts over the past 20 years now (nearly half of which were in the last three years!). But the iPad allows me to see much of the sheet music from those concerts as well. I also keep digital scores for as many pieces as I can get, both that I’ve played and some I’ve just studied, and I can have them with me at rehearsal which has been handy on occasion.

  2. Foot pedal for page turns - this one is handy for string players in particular. Copyists often assume that because we have stand partners in the string section (violins through cellos at least - not basses), we can have one player turn the page while the other keeps playing. This mostly works out, though after COVID we actually have more people using their own stands to keep a little more physical separation from each other, and copyists who put page turns across a passage often seem to do so at featured moments where it’s awkward for half the section to drop out for a few seconds. If you are going to do that with your parts, please try to make it a passage where that won’t matter! Better yet, put the page turn at a rest point, even for the string section. But with an iPad and a page-turn foot pedal (I have the Airturn Duo, which works great for this), I can turn the page with my foot and keep playing simultaneously. This is also useful because when using an iPad we usually only show one page at a time, not a two-page spread, so we have page turns even where the copyist didn’t realize we would have one and didn’t plan for it. I will note that it took me several rehearsals (I’d say about a month) to learn to coordinate my foot with my instrument playing, as I’d never had to use my feet while playing violin or viola like that before!

  3. Ability to have all music with you all the time / save markings between orchestras - somewhat related to the Database point above, but it’s nice to have my entire collection of sheet music with me all the time when I’m out. I’ve found myself in conversation with people at one rehearsal talking about some other piece, and I had that one to show them as well! I can mark up my sheet music with my Apple Pencil (other styluses or even your fingers can work too), and add bowings or additional expression marks, circle troublesome notes, mark in beat slashes for rhythmic aid, etc. (all the things you’d do with a pencil). By using the Score Layers function in my preferred sheet music app (ForScore - see below), I can save those from one orchestra and hide them when I’m playing the same piece elsewhere. They’re available if I want to consult them, or compare between orchestras, but they don’t get in the way when I’m doing something else. With piano, or even orchestra sometimes, I can also write in a detailed theory analysis (harmonic analysis, formal analysis, etc) to help me understand the piece better, and then hide that when I’m playing so it’s not distracting but I can pull it up when I want to. I was just having a conversation with one of my students the other day where we discussed that ability. With paper sheet music you’d need two copies of the sheet music, or if you marked your analysis on the playing copy you’d have to see it the whole time and it might be harder to read.

Cons: I would be remiss if I didn’t mention a couple of potential problems with iPad sheet music to be aware of. If you are careful these may not be huge issues, but they can still come up.

  1. Battery / App Crashes: The biggest potential issue is that the iPad could run out of battery, or the app you are using could crash, in the middle of a rehearsal or (God forbid) a concert. I always try to start a rehearsal or concert with as close to a full charge as I can, and I have a portable charger I usually bring with me so I can charge while I’m out if necessary. The closest I’ve come to running out of battery was my longest day of playing in Fall 2023, when I played for nearly 8 hours in one day with just a couple of breaks between activities. Without my charger it would have died - it hit about 15% battery by the end of my evening concert that day. Otherwise the battery has mostly been fine, but this can be an issue.

    I’ve found the apps are pretty stable - they need to be, or people will never trust or use them. But they have crashed now and then, fortunately never during a concert. They haven’t crashed much, but when they do, it’s usually while I have other apps doing things as well, so it might not hurt to close other apps while you are in a mission-critical environment to minimize the odds that you’ll have problems. It also would be a good idea to have a paper backup just in case you might need it (make sure you bring it on stage with you - it won’t do much good in your bag backstage! Not that I would know from experience…)

  2. Screen Size: Even the biggest (12.9”) iPad Pros are not as big as standard part or score paper, which is usually 9” x 12” (a 15” diagonal per page). The 12.9” iPad Pro is marginally smaller than 8.5” x 11” (standard letter size) printer paper, which has a diagonal of 13.9”, so if you can read off of that, you should be OK with a full-size iPad Pro. I wouldn’t want to share with anyone else if I were using anything smaller than that model, and people with worse eyesight than mine sometimes don’t like even the full-size iPad Pro, but it works for me even with a stand partner. Depending on lighting conditions, glare can be a problem, so you might need a matte-finish screen cover; most of the time, though, I find it works. One benefit of the iPad is having a built-in stand light! So lack of lighting on stage isn’t a problem for me anymore. Very bright stage lights or playing outside in the sunlight can be a problem, on the other hand, because you have to turn the screen brightness up, and the brighter your screen is, the faster the battery will die.
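Those diagonal figures are just the Pythagorean theorem applied to the page dimensions; if you want to check some other paper or tablet size, it’s a one-liner (this little Python helper is my own, with measurements in inches):

```python
import math

def diagonal(width_in, height_in):
    """Diagonal of a rectangular page or screen, in inches."""
    return math.hypot(width_in, height_in)

# 9" x 12" concert part  -> 15.0" diagonal
# 8.5" x 11" letter page -> about 13.9" diagonal
```

Note that tablet screens like the 12.9” iPad Pro are already advertised by their diagonal, so those numbers compare directly with these results.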

Recommendations for iPad Hardware / Software:

Hardware:

iPad: As mentioned above, I’d recommend the full size iPad Pro (12.9” - bigger if they make a bigger one when you read this!). Study scores work on smaller iPads - if you can read a print miniature score, you can read a smaller iPad most of the time (the iPad Mini’s screen is about the size of a page in a Dover miniature score, which are larger than some others like Boosey and Hawkes miniature scores). But for playing from an iPad, bigger is better. You don’t absolutely need the latest model for sheet music - all the sheet music apps really do is provide a custom database wrapped around a pdf reader with a custom annotation engine designed for marking up sheet music, so they don’t need a ton of computing power. I’ve seen people even just use standard pdf readers on the iPad for this. If you get the current model, it will be longer before you’ll feel the need to upgrade to a newer one, but if you can find a used one at the right price that would be fine. I have the 4th gen (2020) model, and it’s still working just fine for what I’m doing with it in 2024. Someday I’ll upgrade to an M-series iPad, but I mostly use mine for sheet music, reading kindle books and other PDFs, notes for my grad school classes, and standard media consumption tasks - web browsing, videos, etc, so it’s fine for all that. I’d get a minimum of 256 GB of drive space, especially if you are going to do anything else with this iPad besides sheet music. I got the 512 GB model and it’s nowhere near full. (For reference, after 3.5 years of heavy use, mine is at 169 GB, so 256 GB would have been enough. Your mileage may vary based on your use, of course.)

Stylus: I’d really recommend getting some kind of stylus for your iPad for sheet music. I have the Apple Pencil, which is expensive but very nice, and the tip is nice and thin so it feels more or less like writing with a pencil / pen. Many cheaper styluses (styli?) feel thicker, more like writing with a marker, or even a crayon - those are usable but much less comfortable. If you don’t have a stylus, the apps will work with just your finger, but you’ll probably find you have to zoom in on the sheet music while annotating to mark with accuracy. Sometimes that’s even useful with the Apple Pencil. However you choose to do this, you should learn how to use it effectively with the app you choose. For what it’s worth, I basically ignore most of the extra features of the annotation engines in my apps and just treat it as a pencil on paper. It’s readable enough and very obvious which marks are mine and which were in the printed part. I will sometimes use the highlighter feature with different colors, and I also like the white-out function to hide pre-existing marks or delete printed marks we’re going to ignore, but mostly I use the default line tool in black. The main “power user” feature I use in ForScore’s annotation system is the score layers function (see point 3 above).

Pedal: One of the big benefits of using the iPad (or any digital music stand system) for sheet music is having a page-turn pedal, so I can keep playing across page turns. As mentioned above, I use the Airturn Duo pedal, and I like it a lot! It’s probably approaching time to pick up a new one as the contacts for the page turn pedals are getting stuck more often and turning several pages at once, but it lasted basically six years with heavy use for most of that time (especially the past three years), and the battery on these pedals lasts basically forever. I only have to charge it about three or four times per year, and they can last for most of six months on a charge when they are new in my experience.

Software:

Sheet Music Reader: I have two apps in particular that I like for my primary sheet music reader.

I mostly use an app called ForScore, which is a one-time purchase on the app store and I think is worth every penny (it was about $20 last I checked). This is the app I settled on about 10 years ago when I first got into iPad sheet music with my old iPad 2, before I was using it out in the world. It has a solid database system, though as a data nerd none of the apps I’ve tried have had quite the database I’d like. One of my “day jobs” has been metadata manager for production music libraries, so I’m a stickler when it comes to data for music libraries, whether tracks or sheet music. But ForScore’s database is overkill for how most people use it, and they let you have multiple libraries if you have different kinds of sheet music you use. My libraries are Orchestral Parts (and Scores), Chamber Parts (and Scores), Solo Classical Strings, Piano, Traditional / Folk, my Original Tunes, and then a library for Books About Music (PDF theory books, etc). I’ve taken the time to set up setlists for my orchestral concerts back to the end of 8th grade matching my spreadsheet, though I don’t have all the music we played over that time. It did take some time to get that set up, but it’s convenient now. ForScore also has a Mac app, and it syncs across iCloud with my performance keyboard system, and my MacBook Pro which I can use when I’m doing Zoom tutoring with someone among other things.

My other app I’d recommend is relatively new on iOS, though it’s been on Android for a long time and on Windows for a while too, so if you have other systems like that it might be a better fit. It’s called MobileSheets, and I’d say, all else being equal, its database is better than ForScore’s. The reason I’m still using ForScore myself is that it’s more compatible with other people - most other people I know who are using digital music stands are using iPads with ForScore, so we can share music directly with AirDrop and get all the data transferred, including database metadata and annotations, whereas ForScore to MobileSheets would transfer less of that data (and AirDrop only works between Apple devices, not Android or Windows). ForScore also allows me to sync Apple Music streaming tracks with my scores / files, and trigger them from the sheet music or sync page turns with the audio, which is a fun way to listen with the score. For MobileSheets you have to have the audio downloaded without DRM, so Apple Music doesn’t work yet and you take up more drive space that way. It’s not necessarily a dealbreaker, but ForScore’s version is better on the iPad. Support is better with MobileSheets - I’ve emailed Mike, the developer of MobileSheets, and had significant personal correspondence with him, while ForScore has a more substantial ticket system that feels much less personal. Both apps have built-in utilities like a metronome. ForScore also has a built-in tuner; I can’t remember if MobileSheets does. I also just realized that MobileSheets can be used on Mac too, though I haven’t had a chance to try it that way yet.

Other Apps: There are a lot of useful apps for sheet music besides these two. The other sheet music app I particularly like is called nKoda. IMSLP is my go-to source for public domain music, especially in the string / orchestra world, but you can't get copyrighted music there - or at least you're not supposed to be able to. nKoda charges a monthly subscription fee, and in exchange you get access to scores and/or parts for 20th-century copyrighted music in addition to urtext editions of public domain music (and some of the same PD sources IMSLP has). I don't like their reader app so much - I'd really prefer to have it as a source for ForScore instead, but it can be useful to have access to that music in some form at least.

One other kind of app I’d recommend is a scanner app for your phone. I use one from Microsoft called “Microsoft Lens”, but there seem to be hundreds if not thousands to choose from. I use this as a way to rapidly scan my printed parts at the first rehearsal for any orchestra music I couldn’t get online, and then AirDrop it on the spot to my iPad. If I have 10-15 minutes before my rehearsal starts, I can usually get all the music on the iPad by the beginning of rehearsal, and then make all my marks from day one on the iPad.

So that’s some guidance on using the iPad for sheet music, which is the primary way I use it for music these days. But there are other ways to use it more geared towards studio use, so let’s explore that next.

iPad for Studio Use:

There are really three ways I might think about using an iPad as a production tool as opposed to a sheet music reader / digital music stand. One is as an instrument itself, another is as a production system (Notation / DAW), and the third is as a control system for a bigger computer production system (basically as a control surface).

  1. Instrument: An iPad is a general purpose computer, just like a laptop / desktop (or even a phone), so one thing you can do with it is run virtual instruments / synthesizers on it. People tried this as far back as the original iPad in 2010, but it really felt like they started getting good around 2013 or 2014. iPad apps are often (much) cheaper than laptop/desktop programs (I’ll just say desktops from here on out) - I’ve seen some apps where the desktop version was $300, and a basically feature compatible iPad version was $30. That’s expensive for an app, but cheap in comparison. There are a lot of iPad synths to choose from, as well as ways to use them to make music. Some of my personal favorites are pretty much anything by VirSyn or IceGear, Modartt Pianoteq (which was finally released in 2022 as an iPad / iPhone app, and it works with your existing desktop license for whatever add-ons you’d already bought that way), and the Korg apps like iWavestation, Odyssei, or iM1. My favorite general purpose subtractive synthesizer (see below) on the iPad is Zeeon. But this list is very personal, and there are hundreds of other interesting apps that are worth exploring and considering. I also haven’t looked that much recently at any new synths over the past few years as I’ve been back to building out my desktop system and my performance keyboard setup with Pianoteq, so my money has been going there instead of iPad synths.

  2. Production system: Once you have some iPad instruments, you need something to do with them. You can just get some hardware adapters and plug your iPad into a keyboard and an amplifier or mixer system to play them on stage or record them as external instruments in your studio environment, but you can also use most of them “in the box” on the iPad itself to make tracks. There are several reasonably powerful DAWs on the iPad these days. I have Auria Pro for mine, but Cubasis from Steinberg is another popular DAW. Apple recently came out with an almost full version of Logic for the iPad for a $5/month subscription which a lot of people like (maybe not so much the subscription part!). Korg has a self-contained DAW called Korg Gadget that was popular a few years ago. They’ve made some moves recently about opening it up a bit, but I think that was to allow you to use their built-in synths in other DAWs, not to use third-party synths in Gadget, which is what I’d like them to do. I made a track with Gadget a few years ago that wound up in my Hybrid / Electronic demo playlists on my Music page - see “The Haunting Woods”.

    Another app to consider for making music with an iPad is called AUM, which allows you to run plug-in (AUv3) versions of both iPad (and iPhone) synths and effect units in a really easy and fast system where you can combine them together. It's a great way to improvise a piece across several synths without the hassle of setting up all the tracks in a full DAW. Recording the improv session might be more cumbersome, but live playing - either for yourself just messing around (which can be a lot of fun by itself!) or in a live performance - might work better this way, and there are MIDI recorder apps / plugins for the iPad, like piano roll editors, that can be used in AUM for looping or recording certain aspects. I like Atom | Piano Roll 2 myself for this function, and there's a set of apps called Xequence AU (Keys and Pads) that give you more powerful touchscreen MIDI keyboards or drum pads than you get by default with AUM or most other built-in touch keyboards. This can be a great platform for live electronic music.

    I’ll point out that a lot of the uses of the iPad for music production will tend to be aimed at electronic tracks, not acoustic recordings, though with the right additional hardware (audio interfaces and mics, etc) you can use some of these apps for multi-track acoustic or electric music recordings as well as electronic synth-based production. But this isn’t going to be as powerful as your desktop / laptop studio for everything you might want to do in music. It is a great way to get into this world for perhaps a lower upfront cost if you just want to mess with things or maybe to get kids interested in exploring music, with the understanding that you may find that you (or they) want to upgrade later on if you don’t otherwise have a desktop / laptop studio system. It can also be good when you are out traveling if you don’t want to bring a full mobile laptop system but you want to have something at least. An iPad by itself, or with an external hardware portable keyboard (maybe two or three octaves) can be effective in certain contexts, or again, just for having fun messing with music (never a bad thing to do!).

    Another use of an iPad as a production system, just like with a desktop setup, is a notation program. Both Sibelius and Dorico now have good iPad apps for their software. I use Dorico on the iPad just like on my desktop setup, and in fact one reason I chose to switch to Dorico as my primary notation program was how good the iPad app is. Sibelius is pretty good on the iPad as well, but they really want you to use it with a qwerty keyboard - either the keyboard cover or an external keyboard. Dorico managed to make a really good app that works with just the iPad by itself - you don't even need the Apple Pencil. I do wish that I had NotePerformer on my iPad for playback. The built-in playback tools are not terribly good for either app, or anything else I've tried for acoustic emulation on the iPad (other than Pianoteq, which does work well on the iPad, though my iPad chokes trying to play back Pianoteq from Dorico - I would need a newer iPad to do that well). As I mentioned near the end of the "How to think about starting a studio" section above, I have loved having Dorico on the iPad and being able to reach over from my bed in the morning and write something down immediately if I wake up with a tune running through my head, which has happened in real life!

  3. Controller for a Larger System: This is the original case I made for why I wanted an iPad back in 2011 when I got my first iPad (2). I thought it would be a cheap(er) way to get a hardware control surface for my main studio setup. I found that I could do that, but that my templates in Logic weren't really designed to work well with a control surface after all (that would have been true for any control surface, not just the iPad). It was also a little cumbersome to set it up and then disconnect it to do something else later, so I didn't wind up using it that way much - mostly just to get cool photos of my studio that looked more impressive! If you do want to try it, see if your desktop DAW of choice has a first-party iPad control surface app, as those will often be the best for that DAW. Logic has one, and Cubase has one at least, though I'm not sure if they've been kept up to date.

So there are some ways to make use of an iPad in the studio (or to make your iPad a studio itself!). Hopefully that gives you some ideas about ways you might want to use an iPad. One thing I like about both the iPad and iPhone is how portable they are (even the iPad Pro fits very nicely in my backpack). So I can take them lots of places and have a way to play some kind of music almost anywhere even if I don’t have my acoustic instruments with me. I’ve mentioned in a couple other places on my website that it would be very difficult to stop me from making some kind of music!

If you do start using an iPad yourself, or if you have any other suggestions for interesting ways to use an iPad for music, please feel free to share them by contacting me!

Synthesis Techniques

Here’s a quick breakdown of the synthesis types listed above. There are other resources that go into a lot more detail. Each category also has a “Synthesizers in the wild” section where I mention some synths I’m aware of that use these methods.

A couple of useful terms to start with: An Oscillator is a sound generator that produces a repeating waveform. Electronic oscillators usually produce some standard mathematical waveforms: a sine wave (a smooth curve - you may have seen them in math class), a sawtooth wave (looks like the teeth of a saw - slopes down from a high point to a low point and then jumps immediately back to the high point, or the reverse), a pulse wave (alternates between a high point and a low point without passing through the points between, at least in principle - if it spends equal time at each point, it's a square wave), and a triangle wave (goes from a high point to a low point and back over equal amounts of time). Noise of various forms is also common in synthesizer oscillators. These shapes all describe the motion of a speaker cone playing the waveform - amplitude over time. Each waveform has a characteristic pattern of harmonics in the frequency spectrum (I have a more detailed guide to the harmonic series here). Briefly:

Sine - one harmonic at the frequency of the sine wave (harmonics are sine waves spaced at equal frequency intervals - 100 Hz, 200 Hz, 300 Hz, etc., or 220 Hz, 440 Hz, 660 Hz, 880 Hz, etc.)

Sawtooth - every harmonic from a given fundamental frequency (the first harmonic), with an amplitude equal to the inverse of the harmonic number (if the first harmonic is at volume 1, then the second harmonic is volume 1/2, and the third is volume 1/3, etc). Very bright sound, and one of the default synthesis waveforms.

Square - Every odd harmonic (1, 3, 5, etc.) at an amplitude equal to the inverse of the harmonic number. Makes a more hollow sound than a sawtooth wave. The other "default" waveform.

Triangle - Every odd harmonic at an amplitude equal to the inverse square of the harmonic number (Harmonic 1 is volume 1, Harmonic 3 is volume (1/3)^2, or 1/9, Harmonic 5 is 1/25, etc.). A darker, softer version of the square wave sound.

Noise - comes in various forms. Random energy across the frequency spectrum, with different distributions available. TV Static is white noise, which is random energy averaging equal energy at every frequency, for a very bright noise sound.
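
As a quick illustration of those harmonic recipes, here's a minimal Python sketch (standard library only, not a production oscillator - real implementations band-limit the harmonics) that sums sine harmonics to approximate a sawtooth and a square wave:

```python
import math

def sawtooth_sample(t, freq, n_harmonics=20):
    """Every harmonic at amplitude 1/n (the sawtooth recipe above)."""
    return sum(math.sin(2 * math.pi * n * freq * t) / n
               for n in range(1, n_harmonics + 1))

def square_sample(t, freq, n_harmonics=20):
    """Only the odd harmonics, still at amplitude 1/n."""
    return sum(math.sin(2 * math.pi * n * freq * t) / n
               for n in range(1, n_harmonics + 1, 2))
```

The more harmonics you include, the closer the sum gets to the ideal shape (with some ripple near the jumps, known as the Gibbs phenomenon).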

All synthesizers will make use of two standard "modulators", sources that can adjust a parameter value in the synth in real time: Envelopes and LFOs. An envelope traces a path between two arbitrary points, call them 0 and 1, taking a certain amount of time to travel to various points between them. The standard envelope is an ADSR envelope (Attack, Decay, Sustain, Release), which starts at 0, rises to 1 over the Attack time, decays down to the Sustain level over the Decay time, holds at the Sustain level while you hold the key down, and then falls back to 0 over the Release time. You can also have "multi-envelopes", which give you a large number of break points where you can change the slope or direction of the envelope and get much more complex shapes, which can include repeating rhythms and generate grooves from holding down one note.
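
An ADSR can be sketched in a few lines of Python - this returns the envelope level at a given time, with the simplifying assumption that the note is held long enough to reach the sustain stage before release:

```python
def adsr_level(t, attack, decay, sustain, release, note_off=None):
    """Envelope level (0 to 1) at t seconds after Note On.
    attack/decay/release are times in seconds; sustain is a level."""
    if note_off is None or t < note_off:
        if t < attack:                       # rising 0 -> 1
            return t / attack
        if t < attack + decay:               # falling 1 -> sustain
            return 1.0 - (t - attack) / decay * (1.0 - sustain)
        return sustain                       # holding at sustain level
    rel_t = t - note_off                     # falling sustain -> 0
    return max(0.0, sustain * (1.0 - rel_t / release))
```

Multiply an oscillator's output by this level, sample by sample, and you have the classic volume envelope.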

An LFO is a Low Frequency Oscillator, which generates a repeating signal, usually with a standard waveshape like a sine wave, a sawtooth (ramp) wave, or a square (pulse) wave, and varies some parameter value using that repeating shape at a slow speed (below about 20 Hz, the bottom of the human hearing range). Complex LFOs can have more complex shapes, which might even be definable by the user, and the most complex LFOs and multi-envelopes start to converge on the same kinds of shapes. Common targets for envelopes and LFOs are volume (amplitude), pitch (frequency), and filter cutoff, among many other possibilities.
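
For example, a sine LFO applied to pitch gives you vibrato. Here's a sketch (the 5 Hz rate and half-semitone depth are just plausible example values, not taken from any particular synth):

```python
import math

def vibrato_freq(t, base_freq, lfo_rate=5.0, depth_semitones=0.5):
    """Instantaneous pitch with a sine-wave LFO wobbling it up and down."""
    offset = depth_semitones * math.sin(2 * math.pi * lfo_rate * t)
    return base_freq * 2 ** (offset / 12)   # semitones -> frequency ratio
```

Swap the target for amplitude and you get tremolo; swap it for filter cutoff and you get a wah-style wobble.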

Subtractive Synthesis: More or less the default synthesis technique today. It starts with a sound rich in harmonics (often a sawtooth or square wave generated by an oscillator), and then reduces or removes some of them with a "filter" - most often a low-pass filter, which removes high harmonics while "passing" low ones, but it can also be a high-pass to remove low harmonics, or a band-pass or band-reject filter to keep or remove only certain middle harmonics, respectively. You will then usually modulate the filter cutoff parameter with an envelope, LFO, or performance control (velocity, a MIDI CC knob [see MIDI guide below]) to make the sound change over time. There are many variations on this technique now, and most synthesizers today include at least one or two filters regardless of which type of synth they claim to be.
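
The filter is the heart of the technique. Real synth filters are usually resonant 2- or 4-pole designs, but even a one-pole low-pass shows the idea - each output sample moves a fraction of the way toward the input, which smooths away the high frequencies:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Smooth a signal: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)        # chase the input at a rate set by cutoff
        out.append(y)
    return out
```

Sweeping `cutoff_hz` over time with an envelope or LFO, as described above, is what produces the classic filter-sweep sound.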

Synthesizers in the wild: Almost every synth these days has some kind of filter system unless it’s a very specific model of a particular synth that didn’t. I learned this technique from the ES series built into Logic Pro - ESM, ESE, ESP, and the more sophisticated ES1 and ES2 synths. Outside of Logic, one of the first synths I had for this was Massive from Native Instruments, and these days if I need a standard subtractive synth I usually turn to either Xfer Records’ Serum or Arturia’s Pigments (many of the analog modelled synths in Arturia’s V-Collection also do this well, including one of the most famous synthesizers of all time, the MiniMoog).

Additive Synthesis: Makes use of the principle that you can break any sound down into a combination of sine waves with certain frequencies, amplitudes, and starting phases (plus changes in frequency or amplitude over time), and creates sound by combining sine waves in any configuration. In principle an additive synthesizer could make any sound that any other type of synthesis could make, but in practice it's easier to use the other techniques where they apply. One of the earliest additive synthesizers was (is) the Hammond B3 organ, which uses tonewheels to generate essentially 9 sine waves per key per manual and sets their amplitudes with the drawbars. This is why many simple additive synthesizers sound a lot like Hammond organs! More sophisticated additive synthesizers today usually feature at least 500 sine waves.
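
Stripped to its core, additive synthesis is just a sum of sine waves with independent frequencies and amplitudes. The partial list below is purely illustrative (loosely inspired by the inharmonic partials of bells, not measured from any real instrument):

```python
import math

def additive_sample(t, partials):
    """Sum independent sine partials given as (frequency_hz, amplitude)."""
    return sum(amp * math.sin(2 * math.pi * freq * t)
               for freq, amp in partials)

# Inharmonic ratios like these are part of why additive is good at bells:
bell_partials = [(440.0, 1.0), (440.0 * 2.76, 0.6), (440.0 * 5.40, 0.25)]
```

In a full additive synth each partial would also get its own amplitude envelope, which is exactly where the "cumbersome to program by hand" reputation comes from.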

Synthesizers in the wild: One of the most powerful general-purpose additive synthesizers today is Alchemy, built into (and exclusive to) Logic Pro, but many synths use some version of the technique. See the Synthesizer section above in the software guide for some examples for specific instruments. Another one I like, which hides an additive synthesizer behind a subtractive facade (but with more advanced oscillators, filters, and a frequency-bending function), is Native Instruments' Razor for Reaktor, included with many of the Komplete bundles or available as a standalone (Reaktor Player) synth. The bell sound in my track "A Solemn Quest" (in the Hybrid Orchestral / Electronic list on my Music page) comes from Razor. This technique can also be used as the sound generator for a physical modeling synthesizer (see below), with the modeling engine calculating the values for the sine waves and an additive engine generating them. This is more or less how Modartt Pianoteq works, as far as I can tell from playing with it extensively.

FM Synthesis: Takes two or more sounds in the audible frequency range and modulates (changes) the frequency of one with the output of the other. The wave being modulated is called the carrier; the wave doing the modulating is called the modulator. When FM synthesis is taught, the waves are usually sine waves to keep things simple, but you can use any sound source for either role. The primary parameters are the relative frequencies of the two waves and the width of the modulation - how far around the center frequency it swings (this is called the modulation index). All else being equal, a higher modulation index sounds brighter, and integer frequency ratios (one wave at the same frequency as the other, or double or triple it, etc.) produce pitched sounds, while non-integer ratios drift toward inharmonic, unpitched timbres. You can also get more "operators" involved (an "operator" being the FM term for an oscillator coupled to an amplitude envelope, with a frequency modulation input), at which point you can configure them in a variety of ways.
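
In its simplest two-operator form with sine waves, that looks like the sketch below (strictly speaking the DX-style synths use phase modulation, which behaves almost identically and is what this does):

```python
import math

def fm_sample(t, carrier_hz, mod_hz, index):
    """One carrier, one modulator: the modulator's output (scaled by the
    modulation index) is added into the carrier's phase."""
    modulator = math.sin(2 * math.pi * mod_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * modulator)
```

With `index` at 0 this collapses back to a plain sine wave; raising it adds sidebands and brightness, and the carrier-to-modulator ratio determines whether the result sounds pitched.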

Yamaha licensed the patent for the original version of FM synthesis (developed by John Chowning at Stanford), and released the DX series of synthesizers in the early 1980s, the most popular of which was the DX7. It had 6 sine wave operators that could be configured in 32 algorithms (modulation routings between the 6 operators). It was a notorious pain to program, being one of the first synthesizers with a 2-line LCD display instead of one knob per function, and also one of the earliest synths with preset capabilities. Some people got really good at programming it and sold professional preset packs to others, which basically birthed the market for synth presets we still have today. The DX7 was an extremely popular synthesizer, heard on countless records from the 80s, and it redefined what synthesizers could do, so it's worth knowing about from a historical perspective as much as anything else. FM synthesis in general is known for bright, sparkly digital sounds, and bells are among its most common sounds, since they're hard to make with subtractive synthesis.

Synthesizers in the wild: There are a lot of DX7 clones today, now that Yamaha's patent has expired, given the immense popularity of the DX7. I use two of them from time to time. The first is Native Instruments' FM8 (v2 of their FM7 synth, one of the first clones), a conceptual clone that also adds a bunch of features not in the original DX7 - some appeared in later spinoff models from Yamaha, while others just make programming much easier with a modern software GUI compared to the original version. The other one I like is Arturia's DX7 clone in V-Collection. Many synths today will allow some form of frequency modulation as a concept.

--

These three techniques make up what I call the “abstract techniques” because they are often based on pure mathematical waveforms (sine, square, sawtooth, etc), and deal with sound in the frequency dimension which generally seems more abstract.

--

The next two are “audio techniques”, because they deal with sound in the amplitude domain which is the way sound travels to our ears in the physical world.

Wavetable Synthesis: This technique makes use of small snippets of waveshapes in any imaginable configuration, and then repeats them at speed to make the desired pitch. You can then set up these single cycle or short cycle waveshapes in a table together, and then scan through the table to find the one you want, or scan through it in real time as part of the sound you make, so it’s famous for rapidly morphing sounds. This would be called Wave-scanning Wavetable synthesis. Several popular synthesizers today work in this manner at least as an option, including both Xfer Records Serum and Arturia Pigments noted above. They also both have filters, so they can be used for subtractive synthesis too. Another feature of many wavetable synths is the ability to adjust the shape of the waveshapes on the fly with various kinds of distortion algorithms, and then you have a waveshaping, wavescanning wavetable synthesizer (say that 5 times fast!).
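
Here's the core idea in a few lines of Python - single-cycle tables plus a crossfading scan position (a sketch with nearest-neighbour lookup and just two tables; real wavetable synths interpolate between samples and across many tables):

```python
import math

TABLE_SIZE = 256

def make_table(shape):
    """Build a single-cycle table from a function of phase 0..1."""
    return [shape(i / TABLE_SIZE) for i in range(TABLE_SIZE)]

sine_table = make_table(lambda p: math.sin(2 * math.pi * p))
saw_table = make_table(lambda p: 2.0 * p - 1.0)

def scan_sample(phase, position, table_a, table_b):
    """position 0..1 crossfades between the two tables - sweep it with an
    envelope or LFO and you get the classic morphing wavetable sound."""
    idx = int(phase * TABLE_SIZE) % TABLE_SIZE
    return (1.0 - position) * table_a[idx] + position * table_b[idx]
```

Repeating the table at different rates (advancing `phase` faster or slower per sample) is what sets the pitch.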

Synthesizers in the wild: Wavetable synthesis became really popular again a few years ago with a new generation of modeling technology, and many synthesizers support some version of it now. I first learned it from the wavescanning feature of Logic's ES2 synthesizer, but today I'm more likely to use Xfer Records' Serum or Arturia's Pigments, which both originally billed themselves as wavetable synths first among their various features. Massive also has a wavescanning feature. Many computer synthesizers that are good for subtractive synthesis are actually wavetable synthesizers - it's just that one (or several) of their tables contain the standard subtractive waveforms (sawtooth, square, etc), and they pretty much always have at least one and often two filters.

Sampler: As noted above in the synthesizer section of the Software guide, this is a synthesizer that loads in longer audio files (more than just a few cycles at a time, usually several seconds per file) and assigns them to be triggered via MIDI. You will often have some ability to edit the audio files in the sampler, and you can usually stretch them in pitch and time so that you can take one pitched audio file and spread it across the whole keyboard. This can sound really cool when you play something several octaves lower than the original sound. You will also often have a variety of built-in audio effects. The most popular sampler currently is Kontakt from Native Instruments, part of the Komplete bundle. As a synthesis technique, sampling can mangle audio in various ways to make it sound less like the audio you started with. If you are just playing the audio back "unmangled", I prefer to think of sampling as separate from synthesis.
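
The most basic sampler trick - changing pitch by reading through the audio at a different rate - can be sketched like this (nearest-neighbour with no interpolation or time correction, so pitch and length change together, as on classic samplers):

```python
def repitch(samples, semitones):
    """Resample by a pitch ratio: +12 semitones doubles the read rate
    (an octave up, half the length); -12 halves it (an octave down)."""
    ratio = 2 ** (semitones / 12)
    n_out = int(len(samples) / ratio)
    return [samples[int(i * ratio)] for i in range(n_out)]
```

Playing a sample many semitones below its original pitch with this kind of stretch is exactly the "several octaves lower" effect mentioned above.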

Synthesizers in the wild: As I've mentioned a couple of times in the guide, the most popular sampling software is Native Instruments' Kontakt. Logic has a built-in sampler (the old EXS24, now just called Sampler), and several sample library companies have their own proprietary systems now for their products, which will usually come with their sample libraries. Most DAW-type programs with included instruments have at least one built-in sampler - Reason does, Cubase does, etc. Other synths may also offer at least a rudimentary version - again, Arturia's Pigments does. Serum only sort of has a sample feature - in certain cases you can import a file and it will try to analyze it into a wavetable, with greater or lesser success depending on the file and what you want to do with it. Alchemy in Logic has both analysis and "normal" sampling functions.

--

The final two techniques go together only in not being in the other categories.

Physical Modeling: This technique looks at the physics of how instruments work, and tries in some way to recreate that in synthesis. This is one of my favorite methods for virtual instruments trying to recreate acoustic instruments as it’s very expressive. The simplest version of this says that most instruments feature an exciter of some kind (breath being split in a flute, a buzzing reed or buzzing lips for other wind instruments, the “stick-slip” effect of the bow for string instruments or fingers displacing a string for plucked strings, etc.). This injects energy into the system, which is then dissipated by the system in some way and filtered along the way by the body of the instrument (does this sound familiar? Harmonically rich starting point that gets filtered in some way?). The filtering mechanism is more complex than just a low pass filter, and the starting sound isn’t really just a sawtooth wave, but subtractive synthesis can have similarities. A more complex version of physical modeling is component modeling, where they run physics equations for each element of the instrument interacting, and the resulting sound can be eerily like the real thing, especially in expression.

Synthesizers in the wild: Many of the synths I mention above in the software part of the guide use some version of this technique. Some of my favorites at the moment are Modartt’s Pianoteq and Organteq, Audiomodeling instruments, and Applied Acoustic Systems’ collection, especially Chromaphone.

Granular synthesis: As much an audio effect as a synthesis technique, this starts with an audio sample of some sort and chops it into snippets of a few milliseconds, which can then be reconfigured in a variety of ways (snippets played forward but triggered in reverse order, played in order but with each snippet reversed, every other snippet or every third, etc). You can adjust on the fly how the audio gets chopped up (how long the snippets are, what the amplitude envelope for each one is, etc). The resulting audio can sound like an edgy version of the original, or like something completely unrecognizable.
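
A bare-bones version of the chopping step might look like this (just slicing and re-ordering, with none of the per-grain amplitude envelopes or overlapping that real granular engines use to avoid clicks):

```python
def granulate(samples, grain_len, reverse_each=False, reverse_order=False):
    """Chop audio into fixed-length grains, then re-order and/or reverse
    them before splicing back together."""
    grains = [samples[i:i + grain_len]
              for i in range(0, len(samples), grain_len)]
    if reverse_each:                 # flip each grain back-to-front
        grains = [g[::-1] for g in grains]
    if reverse_order:                # play the grains in reverse order
        grains = grains[::-1]
    return [s for g in grains for s in g]
```

Shrinking `grain_len` toward a few milliseconds' worth of samples is where the characteristic granular texture starts to appear.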

Synthesizers in the wild: I haven’t done too much with this technique, but I have tried it a few times with Reason’s granular synth (I can’t remember off the top of my head what it’s called - Grain?), and there are some interesting iPad apps I used a few years ago. I know at least one of the synths I do use a fair amount does it, but I don’t use that part of it so I can’t remember which. I think it might be part of Arturia’s Pigments.

The Everything Synth

This concept really applies to most synthesizers today, particularly in the software world. The above listed methods are most of the primary methods of sound synthesis, but as I mentioned in the “Synthesizers in the wild” bits several times, most actual synthesizers combine multiple methods which allows for greater variety in the types of sounds that can be generated. The most common additional method added to most synthesizers is subtractive synthesis. Unless the synth in question is a re-creation of a very specific synthesizer in great detail (and sometimes even then), most of them will have at least one or two filters so that you can shape the output of whatever method the synth claims that it is. Wavetable synthesis is another one that many synths can do - most of the “subtractive” synthesizers these days will actually be wavetable synthesizers which have a table of “basic waveshapes” so that you can do standard analog-style subtractive synthesis with sawtooth and square waves (at least). Additive is also generally mixed with other methods now, so that you don’t have to mess with the low level additive parameters yourself, a process that gets very cumbersome very quickly.

But some of the bigger plugins combine several methods for a real "everything" feel - Arturia Pigments, Xfer Records Serum, and Alchemy (built into Logic Pro) all come to mind for this approach. Pigments features wavetable, additive, a light form of sampling, and granular synthesis at the oscillator stage (with two oscillators you can use simultaneously), coupled with a dual filter system offering a variety of filter types for subtractive synthesis, and an extensive modulation routing scheme including some unusual modulators (like a "random" generator). Serum has two wavetable oscillators, with the option to use a version of additive synthesis to generate "key frames" of a wavetable which you can then spectrally morph into a scannable wavetable. It can also analyze samples of a certain type and attempt to re-create them in wavetable format, though it's not really the same as sampling. Again, it includes two filters for subtractive synthesis, and a mode to frequency modulate one oscillator with the other for FM. Alchemy includes several methods: a wavetable system, full low-level support for additive along with an "Additive Resynthesizer" which can analyze a sample and try to recreate it in additive format (which tends to work better than the wavetable version in Serum, in my opinion, at least for pitched sounds), as well as an actual sample playback engine. It includes multiple filters and several modulators, and you can create 8 different sounds and morph between them with a form of XY grid. I was playing with some of the presets the other day, and it's easy to get lost and spend a long time just doing that!

I also recently picked up the latest version of Arturia’s V-Collection (version X as I write this), and one of the new synths in this version is MiniFreak V, which is its own kind of everything synth - it has 21 different synthesis models split between two oscillators with some overlap, which cover a wide variety of synthesis techniques. Some are variants on a theme, so it’s not really 21 different types of synthesis - most of them can be slotted into one of the types I mentioned above. It only has one standalone filter, though one of the osc 2 options is to have it filter osc 1’s signal with another filter. It was addictive playing with that recently too. I also got into the new SynthX plug-in that came with the collection (I think it’s not officially part of V-Collection X, but they were doing a promotion that threw it in as well). I haven’t explored that one that much yet, but it was fun to play with.

Another thing that both MiniFreak V and SynthX V do that is worth noting is include a built-in sequencer for making rhythm patterns out of holding down single notes. Versions of this have been around for a while (Korg's Wavestation was an early synth that did it, and there were step sequencers as far back as the Moog Modular systems, though they weren't as complex), but it's easier and more powerful to work with in a graphic format in a software synthesizer. All of these synths also come with a set of built-in effects, which you can use as-is or supplement with external plug-in (or even hardware) effects. One benefit to using the effects in the synth is that in some cases you can modulate effect parameters with the synth's modulators. This is also one benefit of a DAW like Bitwig Studio - it has "plug-in" modulators, so you can use envelopes, LFOs, or performance controls (or sometimes other sources) to modulate any parameter that can be automated in the DAW, allowing you to modulate parameters of third-party plug-ins that are not built into any given synthesizer. As far as I know Logic doesn't yet have anything like that, though sometimes you can get third-party effects that allow similar options.

MIDI

This is a rather big topic, so I'll try to cover the basics briefly, but there are entire books, video courses, and even college classes (I've taken some of each) devoted to what MIDI is and how it works, so there's a lot more where this comes from. I mention a couple of other resources at the end of this section.

MIDI is an acronym that stands for Musical Instrument Digital Interface. It is a computer protocol that allows devices to exchange data through a compatible, open standard, rather than each company using its own proprietary interface. It allows data about a musical performance to be transmitted between devices, between hardware and software, and between performance controllers and synthesizers or other sound generators. It has also been adapted for use in other fields, such as lighting boards in performance venues. It was first released in 1983 and has been one of the most successful open standards of the past 40 years, leading to the modern plethora of electronic musical instruments from a wide variety of manufacturers that we enjoy today.

The first thing to keep in mind is that MIDI has nothing to do with audio or sound. It is more like notation than actual sound, in that it represents instructions for making music rather than being the music itself (see Life Tips General Musicianship Tip No. 2). It has been extended and expanded over the years with various additional protocols to be more powerful. If you think about how much computers have changed since 1983, it’s really remarkable that this standard still exists as much as it does today.

MIDI 2.0 was released a few years ago at this point (writing in early 2024), and support for it is being rolled out slowly. But the main version of MIDI today is still an extended version of MIDI 1.0. It can now be transmitted via a USB cable rather than the original 5-pin DIN connector, though you will still find jacks for 5-pin cables on many devices today as well. In my studio, though, all my keyboards and wind controllers use USB.

The standard itself supports a variety of message types, of which the most common are Note On, Note Off, and Continuous Control (sometimes called Control Change, both abbreviated as CC). There are 5 other types as well: Program Change for preset selection, Pitch Bend, Channel Aftertouch, Poly Aftertouch, and System Exclusive (SysEx).

Message Types:

Note On and Note Off are used to indicate that a note has started and stopped sounding respectively (often that a key has been pressed or released, but not every MIDI instrument is a keyboard).

CC messages allow relatively fine-grained control over a continuous parameter such as volume, vibrato depth or speed, or filter cutoff frequency. Some of them can be used as switches too - a standard sustain pedal is a switch (the pedal is either up or down), though continuous sustain pedals are getting more common these days for partial pedaling in classical piano and other styles. You have 128 possible MIDI CCs.
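As a small illustration of the switch usage, here’s a Python sketch (the function name is my own) of the common convention for reading CC 64, the sustain pedal controller: values in the lower half of the range mean pedal up, and the upper half mean pedal down.

```python
def sustain_down(cc_value):
    """Interpret a CC 64 (sustain pedal) value as a switch.

    By common convention, 0-63 means pedal up and 64-127 means
    pedal down; a continuous pedal sends the full 0-127 range.
    """
    return cc_value >= 64
```

So a switch pedal just sends 0 when released and 127 when pressed, while a continuous pedal sends intermediate values that a piano instrument can use for partial pedaling.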

Program Change is used to select which preset you are currently using.

Pitch bend will allow you to continuously bend the pitch like a guitar string bend, or “lipping” a pitch on a wind instrument to get it in tune or slide into or out of a note. String glissando effects are also possible with pitch bend, though most of the time you will use it over a fairly narrow range - say a whole step. Most synthesizers will allow you to select larger intervals, but they may be harder to control.

Aftertouch refers to adding pressure to a key after you are holding it down. The more pressure you apply, the higher the value from 0 (no extra pressure) to 127 (max readable pressure). Channel aftertouch will apply equally to every key being held down at once regardless of which key is being pressed harder, while Poly Aftertouch will read each key individually for aftertouch.

System Exclusive was included so that manufacturers could add features that weren’t available through the base specification. These days I don’t see them used that much, but some older instruments make use of them.

You will notice that I mentioned 128 values a couple of times there. This is a critical number in MIDI - 7 bits of computer data. If you know about computer programming, you might be wondering why it’s not 8 bits, or 1 byte. Actually it is, but the first bit is a switch that tells the system whether the following 7 bits should be read as a “Control Byte” (the MIDI specification calls this a Status Byte) or a “Data Byte”, so there’s only 7 bits of usable data. Control Bytes tell the system what the following Data Bytes are referring to. A Control Byte uses 3 bits to set which of the 8 message types is being triggered, and the last 4 bits point to one of 16 available channels. The following Data Byte or Bytes use all 7 bits for data, allowing 128 possible values, usually expressed as 0 to 127.
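That byte layout can be sketched in a few lines of Python (the function name is mine; the bit positions come straight from the description above):

```python
def parse_control_byte(byte):
    """Split a MIDI control (status) byte into its parts.

    Bit 7 (the top bit) is set to 1 to mark a control byte;
    bits 4-6 select one of the 8 message types;
    bits 0-3 select one of the 16 channels.
    """
    if not (byte & 0x80):
        raise ValueError("top bit clear: this is a data byte, not a control byte")
    message_type = (byte >> 4) & 0x07  # 0-7, e.g. Note On is type 0b001
    channel = byte & 0x0F              # 0-15, usually shown to users as 1-16
    return message_type, channel
```

For example, the byte 0x90 decodes to a Note On on the first channel, and 0xB3 decodes to a CC message on the fourth channel (remember that channels count from 0 internally but are shown to users as 1-16).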

Note Messages (both On and Off) have two data bytes following - the first gives you the note number out of 128 possible notes. By convention, MIDI note 60 is middle C, and each number above and below that is respectively a half-step higher or lower (so 72 is the C an octave above middle C, 48 is an octave below, etc). If you start using other tuning systems then this can be modified depending on exactly which system you use, but this holds if you are using some version of the standard 12-note-per-octave system in general use today.
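Here’s a quick sketch of that numbering in Python (the helper is my own, and so is the “C4” octave labeling - octave numbering conventions actually vary between manufacturers, with middle C labeled C3, C4, or C5 depending on who you ask):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(note_number):
    """Convert a MIDI note number (0-127) to a name, labeling middle C (60) as C4."""
    octave = note_number // 12 - 1       # groups of 12 half-steps per octave
    return f"{NOTE_NAMES[note_number % 12]}{octave}"
```

With this labeling, note 60 comes out as "C4", 72 as "C5", and 48 as "C3", matching the octave relationships described above.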

The second data byte of a Note On or Off message is for note velocity, and reflects MIDI’s origins in the world of keyboards. Velocity is the speed at which a key on a keyboard is depressed, which usually translates into how loud the sound is, as it would on a piano. You can also think of this as how hard you hit the key, but it’s actually better to think of it as how fast you hit the key, as this will reduce tension in your playing - something discussed all the time in piano technique.

MIDI CC Messages have two data bytes: the CC number (0 to 127) followed by the new value it should take (also 0 to 127).

Program Change has only one data byte - the number of the new program to change to - so you can have up to 128 programs per channel.

Pitch Bend uses two data bytes together for 14 bits of resolution (16,384 values), giving you much finer control over the precise pitch than 128 values would allow.
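A minimal sketch of how those two 7-bit data bytes combine into one 14-bit value (the function name is mine; the least-significant byte arrives first in a MIDI 1.0 pitch bend message):

```python
def pitch_bend_value(lsb, msb):
    """Combine pitch bend's two 7-bit data bytes into one 14-bit value.

    The result ranges from 0 to 16383, with 8192 as the
    center position (no bend).
    """
    return (msb << 7) | lsb
```

A centered pitch wheel sends lsb=0, msb=64, which combines to 8192; full bend up is 16383 and full bend down is 0.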

Channel Aftertouch only sends one data byte with the new value of the aftertouch setting. Poly Aftertouch sends two bytes, one for the note in question and one for the new value.

System Exclusive messages vary in length and content depending on what the manufacturer is using them for.

How to use MIDI

So how does all this affect a practicing musician vs. someone making the instruments? The biggest thing is de-coupling the playing mechanism from the sound source. Most acoustic instruments come with their own playing mechanism, and you can’t readily swap in completely different types of playing mechanisms. Usually you can’t decide to blow into a violin, or to pluck or bow a clarinet. You can theoretically swap the action in a keyboard instrument, but in practice that’s not something people usually do. But with electronic instruments using MIDI, you can use different performance controllers for the same synthesizer, and conversely you can play several synthesizers with the same control mechanism. I have my primary controller keyboard and that’s how I play most of my sounds in my studio setup, from a variety of sound generators. But I also have small portable keyboards I can use with my setup, and I have a wind controller, and even my iPad can be set up to generate MIDI data from apps that could be used to control any given synth. Some of my tracks have featured the wind controller being used on electronic sounds in synths like Serum to give a different kind of motion to my sounds compared to the usual velocity, modwheel or envelope / LFO (see Life Tips Composition Tip No. 23 for more about motion in synth patches).

This type of system is also what allows one person to make large, multi-instrument tracks like orchestral mockups or electronic soundscapes. You don’t need a large number of people to get a sound like a large number of people, and with another nice trick of MIDI, you can pretend you play all the instruments even if you have no idea how to make a sound on an actual horn or trumpet, for example. The trick I’m referring to is the fact that MIDI is a control protocol, so you can change the tempo and record your parts at half speed, then play them back at full speed with no loss in quality - unlike speeding up an audio recording of actual acoustic instruments. The one thing you will need to be careful of if you do this is that you phrase things differently at a slow tempo than at a fast tempo - most of the time you will hold a note for more of its full length when playing slowly than when playing quickly, so if you fail to account for that when recording slowly, the full-tempo version will sound overly legato. You need to play it like it’s fast, but slowly, and this is harder than you might think. It’s still easier than getting that good at the actual acoustic instrument, though.

One more thing to keep in mind is that you can make MIDI instruments do things that are impossible for the acoustic instruments so you need to be careful about learning to write for acoustic instruments using samples or synthesizers (see Life Tips Composition Tip No. 1+2 for more on that). Notation software also uses MIDI to communicate with playback software like Note Performer or various other sample libraries you might use, and notation software in particular can play tricks on you. But even in a sequencer, and even with a wind controller (for wind parts), there are still traps you’ll fall into if you are not careful.

One easy one to fall into is mixing poorly - it’s really easy to write in a way that won’t work in a live ensemble and then crank a track in the mix to fix it, which is not available in a concert environment much of the time (this even gets professionals who were writing before computers - see Ex. 1 below (click on it to open it)). You can mic instruments in a concert hall, but you should try to write in a way that you don’t need to mic them for balance.

Another common issue I’ve found both in general and in my own writing is misusing the low instruments, particularly low brass. I’ve often found that tuba and bass trombone sound really powerful in the lowest part of their range in samples / modeling synths, but in their acoustic counterparts that range is often more flabby than powerful, and writing an octave higher sounds much thicker with the acoustic instruments than with samples. So if you are writing for live players, maybe think about writing passages an octave higher for those two in particular than you think you need to from the computer sounds. There’s a place for the deep bass of the lowest notes of the tuba, but it may not be as often as you’d think from electronic sounds. This may also affect contrabassoon as well - you don’t need it as often as you think you do in an acoustic orchestra, and it’s a really noticeable presence when you use it. I’m not saying don’t use these, but try to learn what they sound like acoustically if you are writing a piece that will eventually be played by live players, and be ready to make some tweaks if you use those ranges / instruments.

If you are writing straight to a recording without going through live players, then you can write anything that sounds good on the computer, but even there things will often sound better if you write in a way that could work in a hall if you are writing for what would be an acoustic ensemble.

Finally, one other thing that often trips up students when they first get into computer studios and MIDI is that MIDI is not audio, and vice versa. You can’t convert a MIDI track to an audio track - you have to record the output of a synthesizer or virtual instrument that is receiving the MIDI data. You can have different synths interpret the same data and it will sound very different. Some sequencers might have an option that looks like converting the MIDI data to audio (often called “bouncing” - Logic Pro has an option to “Bounce In Place”, for example), but under the hood it is recording the audio output of the synthesizer to an audio track.

There are programs now that try to scan an audio track and transcribe it automatically to MIDI, and sometimes they work well and sometimes they don’t. They generally work better on clean, monophonic audio with only one instrument playing - chordal instruments like guitar and piano often give them trouble, and I’m not aware of any program that can transcribe an orchestra recording to a score automatically. Most people can’t do that very well either, though I know a couple of people who claim to be able to. I can do it to some degree, depending on how complicated the score is and how good the recording was, but it’s extremely difficult in general and well outside the expected level of any college musicianship or ear training class. The ability to listen to music and recognize the timbres of instruments and the kinds of textures being used is very useful; the ability to write out a fully accurate orchestral score just from listening to a piece is less necessary.

By the way, the reverse of this - looking at a score and hearing as much as possible of what it sounds like in your head (also called “audiation”) - is a useful skill to develop. I don’t know how many people get to where they can just glance at an orchestral score and hear it fully produced from that, but getting to where you can at least piece it together by looking at it can help, and the ability to read a single part and hear it more or less fully fleshed out is doable once you get good at general ear training and timbre recognition. This goes for other kinds of music too - hearing as much as you can of a piano piece from reading the sheet music is helpful, and having some idea in other genres of what an arrangement could sound like from a lead sheet would be good too.

MIDI is very useful in the music studio, and it is a good idea to become extremely familiar with how it works. Several books I’ve used have helped me with that, including chapters in both “Understanding Audio” by Daniel M. Thompson, and “Sound Synthesis and Sampling” by Martin Russ, referenced above (in the Hardware Training and Software Training sections, respectively). I also like the video series “MIDI Demystified” from MacProVideo. After that, it’s a matter of playing with it a lot and practicing - treat your studio like it’s a real musical instrument you have to practice like any other, and it will greatly reward you!

  • In Dvorak's New World Symphony (No. 9 in E minor under the system I learned), he writes a passage for flute that is very difficult to pull off successfully. Here's a cued video of the passage (score video).

    This is from the second theme section of the first movement, and the flute in its lowest register states the theme. It's marked piano in the flute part, but both of the times I've performed this, the player really has to play a strong forte to have a chance at being heard, and the strings that are accompanying have to play as softly as humanly possible. We have to play on the very edge of the bow, over the fingerboard (sul tasto), and we're already using mutes. Sometimes the conductor even has to resort to having only half of the section play in order for the flute player to be heard sufficiently. This passage is really not well orchestrated in that regard. If you can make it work, the sound is lovely, but as written it doesn't work without a lot of additional tricks being added to help the less than ideal orchestration.

    I've never seen anyone mic the flute, and again, that isn't really the way we think in a classical orchestra - most of the time we'll allow a sound to be buried in the "mix" before we mic an instrument for balance in a hall, unless the piece being performed is a contemporary-style piece (rock, jazz, pops, etc) where that sort of thing might be more accepted. In a classical context like the New World Symphony, we'll use as many acoustic tricks as we can think of, but we won't mic the instrument.