Capital-M Music

Studio Guide

by Richard Bruner

I’ve put together a guide with some things to think about when building out your personal or professional music studio. This is a major task for a composer, producer, or recording artist/engineer, and it’s something you’ll work on for pretty much the rest of your life as a musician. This guide has several components: a description of several types of music studios you might want to build, some thoughts on how to get started with the process, a description of the different types of equipment and software you will need at some point (along with what I use in each category at the moment), and finally some extra mini-guides at the end covering the more common synthesis techniques and a quick crash course in MIDI. Both of those are very big topics that people have written thick books about, so I can only cover the basics here, but I’ll recommend some books if you want to follow up for more information.

This guide goes along with two other guides I’ve put together - one for my overall musical philosophy, “Capital-M Music”, and a second “Life Tips” guide that relates some of my experience in the wide world of Music in areas ranging from being a gigging performer, to composition, to general musicianship. I will also cross reference these other documents below as relevant to what I’m discussing here.

Studio Categories

Starting a Studio

Equipment Descriptions: Hardware

Equipment Descriptions: Software

Equipment Descriptions: Extras

Bonus Guide: Synthesis Techniques

Bonus Guide: MIDI

First, let me give a quick breakdown of common studio types. Many studios combine some or all of these elements and are what you might call project studios, usually owned by a single person and used primarily for their projects, either personal or commercial (for clients). Many of these are at the person’s house, so they can also be called home studios, but either way, it’s still useful to think about types of studios.

Studio Categories

Composition Studio: A studio whose primary output will be sheet music, either for a recording session or for live performance in a concert music environment or a “band” environment. The primary tool will be notation software, and all you really need is a laptop or even an iPad and notation software. It would also be useful to have some type of MIDI keyboard, and for your main desktop setup I’d recommend at least 61 keys. Audio output will be secondary, just to get a basic mockup or to check for errors in the notes and rhythms you write. This setup will be used by concert composers and by music copyists, whether for concert music or media projects.

Electronic Production Studio: A studio whose primary output will be an audio file. Often entirely electronic, this is a studio for various forms of electronic dance music, but also for making higher-end mockups for film projects (some composers hire people to do that for them), and with a simple audio interface you can record some live elements as well for pop songs and other similar styles. The primary tool will be the DAW (digital audio workstation), and you will need some kind of MIDI input device, probably a keyboard - again, at least 61 keys for your main setup, and 25 or 37 for a portable setup. Another popular MIDI device for this kind of studio is some sort of beat controller, like an AKAI MPC or Native Instruments Maschine, particularly for electronic dance or hip-hop beat production. You will also have more need for an extensive computer-based or external synthesizer collection, and this is probably where the bulk of your ongoing spending will be. The playback environment is also more critical here, as you will want to be able to mix your projects effectively, so studio monitor speakers will be an element you’ll need to consider, along with ideally getting your mixing environment acoustically treated. Most likely to be used by a composer who calls themselves a “producer”, or by a mockup artist.

Film Scoring Studio (or today perhaps more accurately “media composition studio”, as most commercial composers work in several types of media, including film and TV but also video game scores and commercials for TV or internet advertising): Combines both of the above, and also adds the ability to score to picture. Most of the main DAW programs and even most of the main notation programs offer extensive support for scoring to picture now, so it’s more a matter of learning how to do that than it is a matter of being able to do that. Familiarity with stylistic tropes and a sense of how dramatic music works (meaning music in drama - not necessarily big and bombastic, though it can be!) will be more critical here, and that’s a whole other area of training outside the scope of my capital-M Music project. Most likely to be used by a media composer, of course! Orchestrators might also like a setup like this, and really, even outside of film scoring, this is going to be the most versatile general home project studio.

Recording Studio: This one is the likeliest to be outside a person’s home. Electronic studios and film scoring studios (and more advanced composition studios) will all have some ability to record, but you may find that the time comes when you need to record more than you can at home, or at a higher quality, so you’ll want to go to a professional recording studio. Or you may want to run one. Here, you’ll need to set up an extensive live room and mixing room (two separate rooms), and your mic locker (the microphones you have) will need to be quite extensive. You’ll also want either a mixing board or a large-format audio interface (or several) with lots of inputs - at least 24 if not 48 or more - and a very nice studio monitor speaker setup. The software tool will almost certainly be Avid Pro Tools for this setup, and you’ll need to get very good at using it if you want to run a recording studio. Most likely to be hired by any of the other types, or run by an “audio engineer”, or sometimes a “recording engineer” or “mix engineer”.

Thoughts on Starting to Build a Studio

This is an ongoing process that you will probably be engaging in for the rest of your life in music once you start building your studio. It can be daunting to get into because of how much equipment it seems like you need and how much a lot of it costs. Also, there are websites and trade publications that feature really high-end professional studios, and comparing what you have to them can make it feel like you’ll never have enough. But you don’t need that much equipment to start with, and most of those people didn’t build their studios overnight. You start with a modest studio and then add on and upgrade your gear over time. Keep in mind that once you are making money with your studio, you can theoretically justify spending more on it as a business investment; as a student or an amateur, it is harder to justify thousands of dollars all at once without some way to make a return on that.

My story:

I got my first real studio setup at the end of 8th grade going into high school (this was in 2004), and it took the form I have labeled as a “composition studio” above, namely a laptop computer and Sibelius 3 notation software. I used a kid’s keyboard my family had gotten for me a few years prior until I was able to get a 61-key MIDI controller keyboard a couple of years later, the M-Audio Axiom 61. I got a couple of programs for sequencing over that period, and we kept adding elements bit by bit - a sample library here, a microphone and a small mixing board so I could record myself - and by the end of high school I had a modest but usable personal project studio. When I went to Berklee, they required us to get a MacBook Pro and a software suite that we bought through the school, and I kept building my studio up slowly. At this point, I have been building my studio for about twenty years, and I’d say I now have a reasonably decent midlevel professional film scoring studio, which can also be used for many other types of music composition and production. There are still plenty of aspects of it that I’d like to upgrade at some point, but it works for me at the moment.

Getting Started Today:

If you are starting out today, here’s what I’d recommend. You probably already have a computer, and you can get started with whatever you are using now. If it’s a Mac, then it has GarageBand built in, which is an adequate DAW to start with. On PC, there are free DAWs which I haven’t tried, or you can get lite versions of Cubase among other programs. If you want to try notation, my favorite free program is MuseScore, or look into getting a lite version of Dorico or Sibelius, which will be cheaper than the full “pro” or “ultimate” version. It will be limited, but you might be able to start that way and upgrade when you are ready to do so. If you are a student, take advantage of student discounts for software while you are still in school.

There are various free or relatively inexpensive synthesizers and sample libraries to start with in that world. Eventually you will want to upgrade to commercial-grade synthesizers and libraries, but you don’t have to buy them all at once. This is an area where you will keep buying pretty much forever. I have a setup that works for me now, but I still get new synths and libraries from time to time when they seem intriguing, when I have some extra money, or when a particular need comes up in a project I’m working on. One thing to consider is what comes built into your DAW software. Logic Pro has the best overall suite of included synths and effects, though if you are writing for orchestra, I’d still recommend something else over the included orchestral sounds. But for electronic music, Logic is excellent as a starting point. One of my favorite synths in Logic is Alchemy, which holds its own even against the third-party commercial synthesizers. Logic is also relatively inexpensive compared to its professional-level competition. With most other DAWs, I’d recommend getting third-party synthesizers and libraries sooner, some of which I mention below.

Hardware-wise, once you have a computer I’d start with a MIDI keyboard of some sort, and then, depending on your studio goals, you’ll want an audio interface and ideally real studio speakers as soon as you can get them. You can start with computer speakers (external - built-in speakers will never be adequate for this work, even as nice as the speakers on my MacBook Pro are now), but they won’t be as useful for mixing as studio monitors. Headphones are another option as a stopgap, but there are other problems with mixing on standard studio headphones, and you are better off with speakers as soon as practical. You will want headphones anyway for recording yourself or others if nothing else, but mixing on them should be a last resort.

If your goals are primarily to write for acoustic instruments, then you really may not need much more than I mentioned above in the “Composition Studio” section. A laptop or desktop computer (or an iPad), notation software, maybe Wallander NotePerformer for playback rather than the built-in sounds of those programs (note that it only works with Finale, Sibelius or Dorico, and only on the computer, not the iPad), and a 61-key MIDI keyboard will get the job done pretty well for composing for acoustic instruments. You don’t even really need the keyboard, but I find it helpful to have one in front of or next to me while I am writing, just as a guide to figure certain things out, and it can be handy as an input device in notation software as well. As I mention in Life Tips Composition Tip No. 7 and in the Multi-Instrumentalist section of my main paper, keyboard skills are essential for composers, and this is one reason why I say that.

I would also think about writing on paper (or handwriting on an iPad) with your acoustic instruments or even no instruments, either singing or just hearing music in your head and trying to write it down from that, at least as an exercise now and then to build or maintain that ability, so that you are not completely dependent on your computer to compose. I have a long section about that in the middle of the Musicianship portion of my main paper, but as a Musician it is good to be able to compose on your own terms. Writing on the computer can lead to writing in a rut and not thinking about things that are difficult to enter into the software, which is really aimed at fairly mainstream writing. I do most of my composing by improvising on my instruments and then working it into the computer directly, but it doesn’t hurt to try it by hand now and then for variety. Some people even prefer to work that way.

If you are into extended techniques or other so-called “New Music” elements, writing by hand at first will be almost necessary, as the software doesn’t tend to handle those kinds of things well. You can do them in notation software, but it’s best to enter them into the computer after you’ve composed them rather than trying to compose and fight the software simultaneously.

Even if you do make music that works well in notation software, you’ll want to learn the software thoroughly, as it is very frustrating to compose and struggle with the software at the same time. Having to break out of your composition mindset to look up a function of the software can make you forget what you were trying to write in the first place, so the less you have to do that the better off you’ll be. Obviously while you are first learning the software there will be a lot of that, but the sooner you can get past that stage the easier it will be to go back to composing!

One other tip for composing here - have some kind of recording device with you all the time. If you have a smartphone, the built-in Voice Memos app (or whatever it is called on your phone) works perfectly for this. You can be struck with ideas at any time, and if you can just whip out your phone and hum them into it, you can salvage a lot of ideas that would otherwise be lost to the ether. I can’t tell you how many times I would be in a practice room at Berklee and get some cool idea, and then have to run down the hall to my dorm room where my laptop was so I could try to put it down. Many potentially interesting ideas were lost in the two minutes it took to run down the hall, turn on the computer, and realize I’d forgotten my idea. If I’d had my handheld recorder, or today my smartphone, I could have just recorded a scratch recording on the spot in the practice room and then transcribed it into the computer when I got back. I also like having the Dorico app on my iPad now. There have been a couple of times that I woke up with a new tune running through my head from my dream, and I was able to roll over, grab the iPad and write it down within seconds. I named them “Dreaming in Tune No. 1 and No. 2”!

On to the product categories guide:

Hardware

Computers: Use whatever kind of computer you are comfortable with - there are endless flame wars online about whether Mac or PC is better (or even Linux), and really it comes down to your preference. I’ve used both Mac and PC: I used PCs growing up, used Mac for about 10 years between Berklee and the first few years after that, switched back to PC in 2018, and most recently came back to Mac in 2023. I think the current Mac computers are the best available for music production, but boy do they make you pay for it! You’ll want a computer with at least 32 GB of RAM, a 1 TB SSD (or larger), and a decent CPU. Probably anything in the Apple M series will be fine, but I’d recommend at least an M(x) Pro chip. My MacBook Pro has the M2 Max, which is probably overkill, but it’s a great computer! For PC, you want at least an i7-level Intel chip for the CPU.

Either way, get at least a backup drive for your system drive, plus another external SSD (or a second internal SSD if you have a desktop that can take one) for your sample libraries. Capacity-wise, the sample drive should probably be at least 2TB today if not more, depending on how many sample libraries you think you’ll get. The backup drive (which can be either a solid-state drive or a hard-disk drive) should be at least double the capacity of your primary drive for primary drives up to 1TB; if you have a 2TB primary drive, you can probably get away with a 2TB backup, because it will take you a long time to fill up that drive. At minimum, the backup drive should match the capacity of the primary drive.

Keyboard (MIDI): I’ve liked M-Audio keyboards for most of my studio keyboards. I currently have an M-Audio Code 61 as my main studio controller keyboard. When I first came out to Los Angeles, I got a Novation Impulse keyboard, which was fine too, but I switched back to M-Audio for my current setup. I also like Arturia keyboards, and my small keyboards are both Arturia - a 2-octave (25-key) MiniLab, and a 3-octave KeyStep 37. The KeyStep would be my ideal travel keyboard if it fit in my backpack; I do bring it when I travel with a suitcase, but when I just have the backpack I bring the MiniLab.

88-key keyboards will depend on what you want to do with them: digital pianos (for home or gig performance) may be different from studio controller use. I’ve loved Roland keyboards for a while - I think they have some of the best relatively compact hammer-action keybeds in the PHA-4 and PHA-50 actions. My home performance keyboard setup (which is what I practice and record piano on) is quite complex now, but the heart of it is a Roland FP-90 digital piano, which has the excellent PHA-50 action. I couple it with Modartt Pianoteq running on a Mac Mini as the primary sound source, and it’s amazing! I’ve often found other weighted keybeds too heavy for my taste (particularly from Yamaha), though they can work for some people. Kawai is another high-end brand for digital pianos.

For a primary studio controller, I’d go with at least 61 keys at home, which translates into 5 octaves of keys. The formula is to subtract 1 from the total number of keys and divide by 12 (see the quick sketch below): 25 keys is 2 octaves, 37 is 3, 49 is 4, 61 is 5. After that they jump to piano-style sizes, so 88 is a full-size piano keyboard (a little over 7 octaves), and 76 is one octave less than that. I would also recommend getting one that has a modwheel, a pitchbend wheel (ideally separate from the modwheel), and sliders and knobs. Some also have finger drum / trigger pads, and channel aftertouch, which means you can press harder on keys you are already holding down to send data about how hard you are pressing. All will have a sustain pedal input, and many will have an expression pedal input (a continuous pedal that rocks back and forth to send data, used in organs as a volume pedal or on a guitar pedal board for a “wah” pedal among other things). If you can get one that has a continuous sustain pedal input, that’s helpful for playing acoustic piano effectively, but that’s more common on digital pianos than on controller keyboards. This is obviously too many controls to manipulate at once, but you can record things in multiple passes, so you’d have one pass for notes, one for modwheel expression (which is how many virtual instruments control volume expression), and then any other parameters you need to record. Also remember that when inputting many kinds of lines, you only need one hand (for flute parts, for example, if you are not using a wind controller for those), so you’d have the other hand free for something else on that note pass. It will take some coordination, and some practice time to get good at this - it’s an instrument like any other!
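If it helps to see that keys-to-octaves arithmetic spelled out, here’s a tiny Python sketch (purely an illustration; the function name is mine):

```python
def octaves(num_keys: int) -> float:
    """Octaves spanned by a keyboard: subtract 1 key, then divide by 12."""
    return (num_keys - 1) / 12

for keys in (25, 37, 49, 61, 76, 88):
    print(f"{keys} keys = {octaves(keys):.2f} octaves")
# 25 keys = 2.00, 37 = 3.00, 49 = 4.00, 61 = 5.00, 76 = 6.25, 88 = 7.25
```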


Wind Controller: I don’t have any current recommendations here, as the one I’ve been using for a long time (since 2010), the Akai EWI USB, is no longer available. Unfortunately it’s beginning to have some gremlins and not work very well anymore, and I don’t have a good replacement, as the cheaper wind controllers don’t seem to be as powerful anymore and the good ones start at $500 and go up rapidly from there. I do like the one I have when it works, though, and I would recommend getting a wind controller in principle for producing mockups with wind parts, or even for certain kinds of electronic synthesis.

A wind controller will usually look somewhat clarinet-like, and will translate breath pressure into MIDI breath control (MIDI CC 2) and your fingerings into MIDI note values, based on metal contacts you cover with your fingers like the holes or keys on a wind instrument. You can also get Breath Controllers, which are headsets with a mouthpiece that translates breath pressure, but no instrument body; instead you use one alongside your keyboard, which handles note input. These are less common than wind controllers these days, but they can also work well for some people. Both of these devices are useful for playing wind parts on a mockup because they give you some of the natural expression that players of acoustic wind instruments would use, which is harder to mimic with a modwheel, slider or foot pedal on a keyboard.

Audio Interface: I’ve been using Focusrite Scarlett interfaces for the past several years. I used to really want MOTU interfaces, and I still like them in principle but I haven’t paid that much attention to them recently. I also like the Arturia interfaces, and Apogee has several that are nice (or at least they did, but again I haven’t been paying that much attention to this space in the past few years).

An audio interface is a device for getting audio into and out of your computer. It’s a box that plugs into your computer, usually over USB these days, with audio out ports for studio monitor speakers, one or two headphone jacks, and then at least two and sometimes four or even eight inputs for microphones (XLR) and electric instruments (1/4”), usually combined into one port that accepts both. Some of them will have additional ports, like 5-pin DIN MIDI connections, or digital audio in/out in various forms. The main things you are choosing are how many inputs you want and what kind of mic preamps it provides - more expensive will usually be better, of course, but even fairly inexpensive interfaces have pretty good preamps these days. Let’s put it this way - if you are at the stage where you are concerned about the price of your audio interface, the quality of its mic preamps isn’t likely to be the weak link in the quality of your resulting audio!

Studio Monitor Speakers: This is an important category - this is how you will hear the music you are working on, so it pays to get this right. This is not the same as a set of hi-fi speakers or an audiophile listening environment. The primary quality of studio monitors should be transparency: you hear what the sound actually is, without additional enhancement from the speaker. You want a roughly flat frequency response from approximately 20 Hz to 20 kHz (the idealized human hearing range).

There are a variety of different types of speakers. These days, most home studios will work with powered near-field monitors, which means that they have a built-in amplifier and connect directly to your audio interface, and they sit fairly close to your listening position when in use which reduces the impact that room acoustics will have on your sound, though if you can get your room treated acoustically it will still help.

I’m currently using a pair of Adam Audio T7V speakers for my setup. There’s a lot more to say for this category, which you can find elsewhere online if you are looking to buy speakers, but I’ll leave that there for now.

Microphones: This is an area that you will probably never “finish” collecting. Depending on your musical goals you may or may not need much in the way of microphones. If you are primarily interested in electronic music then you don’t necessarily need microphones, but it might still be useful to have one or two for times when you do want to add in acoustic instruments or vocals. There are several types of microphones in wide use - Condenser (large and small diaphragm), dynamic, ribbon, and contact mics are all common in a studio environment, and they each have strengths and weaknesses.

If you only want one mic, or when you are just starting out, get a large diaphragm condenser microphone. These are the most versatile and have the cleanest and fullest sound. Common for vocals and strings in particular once you have a collection, but they work on most instruments. Dynamic microphones are common for high amplitude (volume) instruments like electric guitar amps, drums, and other similar instruments. They are also common for live sound applications for stage use, because they can be handled with less noise, so if you see a singer holding a mic, it’s probably dynamic. The Shure SM57 and SM58 are the quintessential dynamic microphones. Ribbon mics are only going to be found in studio use because they are very fragile, but they have a warm sound for vocals and other similar uses. Contact mics are a special type of mic that you put on a wall or other large surface and usually they are used in conjunction with other types to add to the overall sound.

I have several mics at this point, mostly large diaphragm condensers, but also one small diaphragm condenser and a couple of dynamic mics. I mostly record myself and have never recorded more than one person at a time in my studio, so that’s been enough for me so far.

Another related area you might want to look into is field recorders - these are small devices with built-in microphones that can record to internal media, so they are useful when you are recording “out in the field” (making a recording of a live concert or even just recording sounds out in the wild for sampling purposes or something similar), as there is no need for a computer on-site. Just make sure you have enough batteries! I have a Zoom H2 recorder I use when I need a high quality recording, or if I just need a “scratch recording” I can even use my phone. The Voice Memos app on most smartphones is great for that sort of thing, but as good as the mics are now on smartphones, they are not as good as on dedicated field recorders like the H2.

Hardware Training (Audio Theory):

This would be a good place to mention that the book I used to learn the fundamentals of audio production and studio setup is Berklee Press’s “Understanding Audio” by Daniel M. Thompson. This was the book Berklee recommended as a reference for the music technology placement test when I went there, and I had already read it cover to cover at least once by that point and nearly aced that test. Note that I used the first edition; the current version is the second edition. Most of the first edition is even still relevant today - the fundamentals generally don’t change rapidly!

Software

DAW (Primary): One of the most important decisions you’ll make for your studio is which DAW (Digital Audio Workstation) to get. This is the program you will be using to generate high-end audio output for your MIDI-sequenced or audio-recorded projects. I’d recommend getting a couple, as they can be good at different things. I’m currently mostly using Apple Logic Pro, which I started using at Berklee with Logic Studio / Logic Pro 8, and now I’m on Logic Pro X (which has been the version number for about a decade at this point). Logic is only available for Macs, so when I switched back to PC I also switched to Cubase Pro, which is also great. I might use that on my Mac once I upgrade to a newer version than I have right now and no longer need the USB license key.

Honorable mention will go to MOTU Digital Performer, which I used as a film scoring student at Berklee. At the time, it had some of the most powerful tempo mapping features for film score cues, and I made extensive use of it for those classes. I never really liked the graphic interface as much as Logic though, and found it kind of fiddly to work in, so when I came out to LA, I switched back to Logic as my main DAW. If you are primarily working with recording studio sessions (as opposed to music production or composition / film scoring projects), then you should seriously consider Avid Pro Tools, which is the main industry standard for that world.


DAW (Secondary): Once you have your primary DAW, you might want a secondary one too for electronic production. If that’s mainly what you do, this might even be your primary DAW. Ableton Live is the go-to for that world, but I personally prefer Bitwig Studio for electronic production. Logic Pro has a lot of good features for that side of things too, but Bitwig and Ableton are more focused on it. Reason (from Reason Studios, formerly Propellerhead) is another one to look at, and it comes with many good synthesizers built in (as does Logic). Bitwig comes with a whole synthesizer-building environment in the form of The Grid. I don’t know that much about Ableton, never really having used it myself - I tried it a few times over the years but could never really get into it.


Notation Software: The other primary software program you will be using in your studio for composition / film scoring projects is your notation program. If you do projects that need notation, I’d really recommend getting a notation program and not trying to use the built-in notation editor in your DAW.

My current program of choice is Dorico Pro from Steinberg. I got in on this a couple of months after v1 launched, when it was still very rough around the edges, but as of v5 I’d say it’s a fairly mature program, and it really feels like the only one of the “Big 3” to be innovating these days. The Big 3 are Finale, Sibelius and Dorico.

I used Sibelius from 2004 until I got Dorico (and really until I switched to Dorico full-time in 2021, when they got rid of the USB license key requirement - I hate those things!). I still like Sibelius, and I still maintain my license for it, but I haven’t used it for my own work in a few years. I have tutored some of my students at CSUN in it, though, as they are still using it.

I tried Finale back when I was picking my first program in 2004, and found it hard to think musically with, at least for the way my brain works, but I do know people who seem to like it OK.

If you want to start with a free program to get into this world, I’d recommend MuseScore, which seems like the best free option at the moment. I’ve tried it, and it’s not what I’d use today, but it’s a lot cheaper and will get the job done for a while. I don’t have a problem with it; I just have access to better, more powerful tools now. Several students at CSUN are using it, though our professor is trying to get them to switch to a professional program, since in theory if you are majoring in composition you want to be a professional someday, and learning the tools in school is better than waiting until later (it doesn’t get easier to learn new programs with time!).

Sample Libraries / Synthesizers / Virtual Instruments / Notation Playback:

This is far too big a category for me to give more than suggestions from what I use.

To define terms quickly:

Sample Library: A collection of audio recordings programmed to be played from a keyboard or other midi source, frequently of acoustic instruments, but you can find sample libraries of almost any kind of sound nowadays. Played from a “sampler” program, the most common of which right now is Kontakt from Native Instruments (part of any Komplete package).

Synthesizer: In this context we’re mostly talking software synthesizers, so this is a computer program that generates sound using any of a variety of methods (generally not including sampling, or only using it as one of several techniques, otherwise it’s a sampler, not a synthesizer). Common methods include subtractive synthesis, additive synthesis, FM synthesis, Wavetable synthesis, (sampling), physical modeling synthesis, and granular synthesis. Many modern synthesizers use several of these methods at once for a wider variety of sounds. See below for a guide to these synthesis techniques.

Virtual Instrument: Can be either a sample library or a synthesizer (most likely using physical modeling or additive synthesis) that generates sound like a specific kind of acoustic instrument. I generally only count really expressive instruments here, not sample libraries that rely on recordings of things like pre-performed crescendos. You should be able to play all your expression in your own way, like a “real” instrument, rather than relying on canned performances in the sample library. There are some sample libraries that can do this, though, such as the instruments from Samplemodeling.

Notation Playback: Just what it says - the sounds a notation program uses to play back your score. The programs all come with their own sounds, or you can use a third-party tool.

For notation playback in the “Big 3”, I like Wallander NotePerformer best. It’s good enough that for many projects I write in notation software I don’t need anything else, but it’s not perfect. I haven’t found anything better overall though, and it really is quite good.

What else you get will depend entirely on what you want to do with your studio and music, but for a general all-around studio, I’d recommend the Native Instruments Komplete package. Ideally get Komplete Ultimate - it’s pricey, but if you can afford it I’d go with that, as it’s very comprehensive right from the start (I worked up to it from lower versions and upgraded over time to Komplete Ultimate). I’d also recommend Arturia’s V-Collection for general electronic synthesizers.

I’m a huge fan of physical modeling synthesis, especially for re-creating acoustic instruments. Many composers use sample libraries for that, and they can be fine, but the good ones are very expensive and the cheaper ones are often a little limiting. I started with Garritan Personal Orchestra, and today there are several decent free general orchestral libraries. At CSUN, I helped install ProjectSAM’s Free Orchestra 1 + 2 and a few other sample libraries in the Music Tech lab for the current semester (Spring 2024), though I haven’t used them much myself since I have commercial-grade products now.

But I’ve tried to switch all my acoustic emulations to modeled synths as much as possible, so at the moment I have:

  • Woodwinds and Brass: Audio Modeling SWAM collection, with the EWI USB wind controller when it works.

  • Strings: Synful Orchestra (I want to get the Audio Modeling strings someday), or I just export audio from NotePerformer into my DAW. I also record live violin and viola myself over the synths when I have time to do so, usually 4 tracks each for first violin, second violin and viola.

  • Percussion: This one varies - I have modeled drumset from IK Multimedia’s MODO Drum; samples from several companies, including Native Instruments Komplete and Garritan; and epic percussion from Komplete, SoundIron, and 8dio. I also use Heavyocity Evolve from Komplete for percussive loops, orchestral tam-tam and bass drum, and sometimes for transition effects.

  • Piano, Keys, Pitched percussion: Modartt Pianoteq, Modartt Organteq, and Arturia V-Collection

  • Bass (guitar and upright): IK Multimedia MODO Bass

I also really like Applied Acoustics Systems’ modeling collection, and in particular Chromaphone, which is a lot of fun for making electronic sounds with an organic feel.

My favorite general electronic synthesizers today are Xfer Records’ Serum, and Arturia’s Pigments.

That should be enough to get you started!

Software Training:

You can find a lot on YouTube, but two of my favorite resources for professionally produced software training are LinkedIn Learning (formerly Lynda.com) and MacProVideo.com (which you can use even if you are on a PC). I actually wrote a paper for one of my classes at CSUN (Teaching Music in Higher Education) about how to learn software, where I mentioned some resources for specific things.

Not quite software training, but my all-time favorite book for learning about electronic musical synthesis is Martin Russ’ “Sound Synthesis and Sampling”. It’s quite comprehensive while still being pretty easy to read. Most books on the subject are either at the level of “if you turn the cutoff knob, the filter cutoff will change”, or “here’s some C++ code to implement an FM synthesizer, and here’s the math to prove how it works”. Russ’ book is a good middle ground, not too technical but well beyond the absolute basics. It also talks about how music studios work, and it has a section on performance control of synthesizers, since synthesizers let you separate the control from the sound production and achieve some interesting results.

Extras

Most of the things above are recommended for most project studios in some form or other, but there are some other elements that can be nice to have but aren’t really necessary.

Button Controller (Macro Controller):

This is an interesting category of device - if you don’t have one you may not see why you’d need one, but once you do, they make a lot of things much easier. Basically they are keyboard shortcut button machines. You set up a macro (a combination of one or more keyboard shortcuts performed in a row), and then assign it to a physical or touch-screen button, which you can then lay out and label in a way that makes sense to you. I think the most popular one today (at least the one I’ve seen the most and that I have) is the Stream Deck by Elgato. They make several versions. I have a “regular” Stream Deck that I used in a few different setups but that eventually became primarily the control panel for my performance keyboard setup, which makes Pianoteq feel more like the “built-in sound” of my keyboard rig. I can control preset selection from my custom Stream Deck configuration, change octaves in Pianoteq, and control certain additional parameters, which also allows me to record those changes in the MIDI file generated when I play my improv sessions. I also got a Stream Deck XL, which has more physical buttons (32 vs. 15 for the regular), and I use this with my studio setup to control Dorico, Bitwig or Logic more effectively. These days you can find preset Stream Deck configurations for most professional software (even Zoom). I like Notation Express for Dorico and Sibelius, and I’ve used SideshowFX configurations for Bitwig and Logic.

You can also get the Stream Deck Mini with 6 buttons, though that seems too limited to me, as well as the Stream Deck + with a few buttons and 4 assignable knobs, and they have an app that turns your smartphone into another Stream Deck (for a subscription of a few dollars per month).

Control Surface:

There was a period when I really wanted a hardware control surface for my studio to get hands-on control over my computer plugins and mix settings. This was actually my original rationale for getting an iPad, but once I got one I discovered the way I had set up my templates didn’t lend itself to using a control surface, and I’ve never really gotten into that world after all. These can be handy devices anyway, and are worth looking into if you can afford them. I don’t know as much about the current state of the market for control surfaces, but the one I really wanted was a modular system, the “Artist” series from Euphonix (later acquired by Avid), which consisted of the MC Mix (which you could get up to 4 of, for 32 faders/channel strips), the MC Control (with 4 more faders for a total of 36, plus a screen with soft knobs for controlling plug-in parameters), and the MC Transport (which had a giant jogwheel plus transport controls - play/pause, stop, etc.).

Touch Screen:

This is mostly for PC users right now, and it was actually the main reason I switched to PC for my last upgrade cycle; I wish Apple would allow touch screen support for Macs. I tried this on my PC setup, and it was sometimes nice to be able to reach out and control certain things right on the screen. It wasn’t enough of a draw to keep me on PC for my most recent upgrade cycle, as I wanted the raw power of the new M-series chips more than touchscreens, but if you have a PC then touch screens can come in handy now and then. Not a necessity by any means, but try it out if you have a chance. If you are on Mac and want to try it (and have an iPad), then you can try things like the app Duet, which can turn your iPad into a touchscreen second display for your Mac. Apple has a built-in second-screen function for the iPad with Mac (Sidecar), but it doesn’t support touch input on the iPad’s screen.

Dual Screens:

This one I would almost call a necessity for your primary desktop setup - I have a hard time going back to single screen setups now for serious production work, but you can do everything with just one screen. When I’m recording at home, it’s nice to have my music on one screen and my recording software on the other simultaneously, or even for office work, the ability to see your own documents on one screen while seeing Zoom or even someone else’s screen sharing on the other is very nice. Sometimes I have a spreadsheet on one screen and my web browser on the other. Even just having an app on one screen and Finder windows (or File Explorer windows if you are on PC) on the other is convenient. Once you have two screens you’ll find all sorts of ways to use them, but if you only have one you may not miss having two.

Synthesis Techniques

Here’s a quick breakdown of the synthesis types listed above. There are other resources that go into a lot more detail. Each category also has a “Synthesizers in the wild” section where I mention some synths I’m aware of that use these methods.

A couple of useful terms to start with: An Oscillator is a sound generator that can produce a repeating waveform. Electronic oscillators usually produce some standard mathematical waveforms: a sine wave (a smooth curve - you may have seen them in math class), a sawtooth wave (looks like the teeth of a saw - slopes down from a high point to a low point and then jumps immediately back up to the high point, or the reverse), a pulse wave (alternates between a high point and a low point without passing through the points in between, at least in principle - if the amount of time it spends at each point is the same, then it’s a square wave), and a triangle wave (goes from a high point to a low point and back over equal amounts of time). Noise in various forms is also common in synthesizer oscillators. All of these names refer to the shape the waveform traces (and the speaker cone makes) when you play it: amplitude over time. Each waveform has a characteristic pattern of harmonics in the frequency spectrum. Briefly (with a short code sketch after the list):

Sine - one harmonic, at the frequency of the sine wave (harmonics are sine waves spaced at equal frequency intervals: 100 Hz, 200 Hz, 300 Hz, etc., or 220 Hz, 440 Hz, 660 Hz, 880 Hz, etc.)

Sawtooth - every harmonic from a given fundamental frequency (the first harmonic), with an amplitude equal to the inverse of the harmonic number (if the first harmonic is at volume 1, then the second harmonic is volume 1/2, the third is volume 1/3, etc.). Very bright sound, and one of the default synthesis waveforms.

Square - every odd harmonic (1, 3, 5, etc.) at an amplitude the inverse of the harmonic number. Makes a more hollow sound than a sawtooth wave. The other “default” waveform.

Triangle - every odd harmonic at an amplitude the inverse square of the harmonic number (harmonic 1 is volume 1, harmonic 3 is volume (1/3)^2 or 1/9, harmonic 5 is 1/25, etc.). Sounds like a darker square wave.

Noise - comes in various forms: random energy across the frequency spectrum, with different distributions available. TV static is white noise, which averages equal energy at every frequency, for a very bright noise sound.
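Here’s a small Python sketch (assuming numpy; everything here is my own toy illustration, not from any particular synth) that builds one second of each waveform by summing sine waves per the recipes above - which is also exactly the idea behind additive synthesis, described below:

```python
import numpy as np

SAMPLE_RATE = 44100

def additive(freq, seconds, harmonics):
    """Sum sine waves; harmonics is a list of (harmonic_number, amplitude)."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    out = np.zeros_like(t)
    for n, amp in harmonics:
        out += amp * np.sin(2 * np.pi * freq * n * t)
    return out / np.max(np.abs(out))  # normalize so it doesn't clip

N = 30  # harmonics to sum; a mathematically perfect wave has infinitely many
saw = additive(220, 1.0, [(n, 1 / n) for n in range(1, N + 1)])
square = additive(220, 1.0, [(n, 1 / n) for n in range(1, N + 1, 2)])
# Odd harmonics at 1/n^2; a textbook triangle also flips the sign of every
# other odd harmonic, which the recipe above doesn't mention:
triangle = additive(220, 1.0, [(n, (-1) ** ((n - 1) // 2) / n ** 2)
                               for n in range(1, N + 1, 2)])
```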

All synthesizers will make use of two standard “modulators”, or sources that can adjust a parameter value in the synth in real time: envelopes and LFOs. An envelope traces a path between two arbitrary points, call them 0 and 1, taking certain amounts of time to travel to various points between them. The standard envelope is an ADSR envelope (Attack, Decay, Sustain, Release), which starts at 0, goes to 1 over the time defined by the Attack, decays down to the Sustain level over the Decay time, holds at the Sustain level while you hold the key down, and then goes back to 0 over the time defined by the Release parameter. You can also have “multi-envelopes”, which give you a large number of breakpoints where you can change either the slope or the direction of the envelope and get much more complex shapes, which can include repeating rhythms and generate grooves from holding down one note.

An LFO is a Low Frequency Oscillator, which generates a repeating signal, usually with a standard waveshape like a sine wave, a sawtooth (ramp) wave, or a square (pulse) wave, and varies some parameter value with this repeating shape at a slow speed (below the bottom of the human hearing range, 20 Hz). Complex LFOs can have more elaborate shapes, which might even be definable by the user, and the most complex LFOs and multi-envelopes start to converge on the same kinds of shapes. Common targets for envelopes and LFOs are volume (amplitude), pitch (frequency), and filter cutoff, among many other possibilities.
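Here’s a minimal Python sketch of both modulators (the function names and parameter values are mine, purely for illustration), applied to the amplitude of a plain sine tone - the envelope shapes the note, and a 5 Hz LFO adds a tremolo wobble on top:

```python
import numpy as np

SAMPLE_RATE = 44100

def adsr(attack, decay, sustain, release, hold):
    """ADSR as described above: 0 to 1 over `attack` seconds, down to the
    `sustain` level over `decay`, held for `hold` (the key is down),
    then back to 0 over `release`."""
    def seg(start, end, seconds):
        return np.linspace(start, end, max(1, int(SAMPLE_RATE * seconds)),
                           endpoint=False)
    return np.concatenate([seg(0.0, 1.0, attack),
                           seg(1.0, sustain, decay),
                           np.full(int(SAMPLE_RATE * hold), sustain),
                           seg(sustain, 0.0, release)])

def lfo(rate_hz, n_samples):
    """Sine-shaped low-frequency oscillator (rate well below 20 Hz)."""
    t = np.arange(n_samples) / SAMPLE_RATE
    return np.sin(2 * np.pi * rate_hz * t)

env = adsr(attack=0.01, decay=0.2, sustain=0.6, release=0.5, hold=1.0)
t = np.arange(len(env)) / SAMPLE_RATE
note = np.sin(2 * np.pi * 220 * t) * env * (1.0 + 0.3 * lfo(5.0, len(env)))
```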

Subtractive Synthesis: More or less the default synthesis technique today. It uses a sound with lots of harmonics as the starting point (often a sawtooth or square wave generated by an oscillator), and then reduces or removes some of them with a “filter” - most often a low-pass filter, which removes high harmonics while “passing” low harmonics, but it can also be a high-pass filter to remove low harmonics, or a band-pass or band-reject filter to keep or remove only certain middle harmonics, respectively. You will then usually modulate the filter cutoff parameter with an envelope, LFO, or performance control (velocity, or a MIDI CC knob [see the MIDI guide below]) to make the sound change over time. There are many variations on this technique now, and most synthesizers today will include at least one or two filters regardless of which type of synth they claim to be.
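As a concrete (if toy) illustration, here’s the classic gesture in Python: a bright sawtooth run through a simple one-pole low-pass filter whose cutoff is swept downward by a decay envelope. Real synth filters are more sophisticated (resonance, steeper slopes), but the idea is the same; the names and numbers below are mine:

```python
import numpy as np

SAMPLE_RATE = 44100

def sawtooth(freq, seconds):
    """Naive sawtooth ramp: plenty of harmonics to carve away."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return 2.0 * ((t * freq) % 1.0) - 1.0

def one_pole_lowpass(signal, cutoff_hz):
    """One-pole low-pass filter. cutoff_hz may be an array with one value
    per sample, which is how an envelope can sweep the cutoff over time."""
    cutoff = np.broadcast_to(np.asarray(cutoff_hz, float), signal.shape)
    coeff = 1.0 - np.exp(-2.0 * np.pi * cutoff / SAMPLE_RATE)
    out = np.empty_like(signal)
    y = 0.0
    for i in range(len(signal)):
        y += coeff[i] * (signal[i] - y)
        out[i] = y
    return out

# Cutoff decays from 4 kHz toward 200 Hz, like a filter envelope closing.
saw = sawtooth(110, 2.0)
sweep = 200 + 3800 * np.exp(-np.arange(len(saw)) / (SAMPLE_RATE * 0.4))
filtered = one_pole_lowpass(saw, sweep)
```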

Synthesizers in the wild: Almost every synth these days has some kind of filter system, unless it’s a very specific model of a particular synth that didn’t have one. I learned this technique from the ES series built into Logic Pro - ESM, ESE, ESP, and the more sophisticated ES1 and ES2 synths. Outside of Logic, one of the first synths I had for this was Massive from Native Instruments, and these days if I need a standard subtractive synth I usually turn to either Xfer Records’ Serum or Arturia’s Pigments (many of the analog-modeled synths in Arturia’s V-Collection also do this well, including a model of one of the most famous synthesizers of all time, the Minimoog).

Additive Synthesis: Makes use of the principle that you can break any sound down into a combination of sine waves with certain frequencies, amplitudes and starting phases (plus changes in frequency or amplitude over time), and creates sound from sine waves in any configuration. In principle an additive synthesizer could make any sound that any other type of synthesis could make, but in practice it’s easier to use the other techniques where they are applicable. One of the early additive synthesizers was (is) the Hammond B3 organ, using tonewheels to generate essentially 9 sine waves per key per manual and then setting their amplitudes with the drawbars. This is why many simple additive synthesizers sound a lot like Hammond organs! More sophisticated additive synthesizers today usually feature at least 500 sine waves.

Synthesizers in the wild: One of the most powerful general-purpose additive synthesizers today is Alchemy, built into (and exclusive to) Logic Pro, but many synths use some version of the technique. See the Synthesizer section above in the software guide for some examples for specific instruments. Another one I like, which hides an additive synthesizer behind a subtractive facade (but with more advanced oscillators, filters, and a frequency-bending function), is Native Instruments’ Razor for Reaktor, included with many of the Komplete bundles or available as a standalone (Reaktor Player) synth. The bell sound in my track “A Solemn Quest” (in the Hybrid Orchestral / Electronic list on my Music page) comes from Razor. This technique can also be used as the sound generator for a physical modeling synthesizer (see below), with the modeling engine calculating the values for the sine waves and an additive engine generating them. This is more or less how Modartt Pianoteq works, as far as I can tell from playing with it extensively.

FM Synthesis: Takes two or more sounds in the audible frequency spectrum and modulates (changes) the frequency of one with the output of the other. The wave being modulated is called the carrier; the wave doing the modulating is called the modulator. When FM synthesis is taught, the waves are usually sine waves to keep things simple, but you can use any sound source for either role. The primary parameters are the relative frequencies of the two waves and the width of the modulation, or how far around a center frequency it goes (this is called the modulation index). All else being equal, a higher modulation index will sound brighter, and integer ratios between the frequencies (one being the same as the other, or double or triple the other, etc.) will sound pitched, while non-integer ratios get progressively less pitched. You can also get more “operators” involved (an “operator” being the FM term for an oscillator coupled to an amplitude envelope, with a frequency modulation input), at which point you can configure them in a variety of ways.
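The two-operator case fits in a few lines of Python (my own sketch; strictly speaking this is phase modulation, which is how the DX7 and most digital FM synths actually implement it):

```python
import numpy as np

SAMPLE_RATE = 44100

def fm(carrier_hz, ratio, index, seconds):
    """Two-operator FM: a sine modulator drives the phase of a sine carrier.
    ratio = modulator freq / carrier freq; index = modulation width."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

pitched = fm(220, ratio=2.0, index=3.0, seconds=1.0)   # integer ratio: harmonic
clangy = fm(220, ratio=1.41, index=5.0, seconds=1.0)   # non-integer: bell-like
```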

Yamaha licensed the patent for the original version of FM synthesis (developed by John Chowning at Stanford), and released the DX series of synthesizers in the early 1980s, the most popular of which was the DX7. This one had 6 sine wave operators that could be configured in 32 algorithms (modulation routings between all 6 operators). It was a notorious pain to program, being one of the first synthesizers with a 2-line LCD display instead of one knob per function, and also one of the earliest synths to have preset capabilities. Some people got really good at programming it and sold their professional preset packs to others, which basically birthed the market for synth presets we still have today. The DX7 was an extremely popular synthesizer, heard on countless records from the 80s, and it redefined what synthesizers could do, so it’s worth knowing about from a historical perspective as much as anything else. FM synthesis in general is known for bright and sparkly digital sounds, and bell sounds are some of the most common sounds made with this technique, as they are hard to make with subtractive synthesis.

Synthesizers in the wild: There are a lot of DX7 clones today, now that Yamaha’s patent has expired, given the immense popularity of the original. I use two of them from time to time. One is Native Instruments’ FM8 (v2 of their FM7 synth, one of the first clones), a conceptual clone that adds a bunch of features not in the original DX7 - some appeared in later spinoff models from Yamaha, and others just make programming much easier with a modern software GUI compared to the old way. The other one I like is Arturia’s DX7 clone in V-Collection. Many synths today will also allow some form of frequency modulation as a concept.

--

These three techniques make up what I call the “abstract techniques” because they are often based on pure mathematical waveforms (sine, square, sawtooth, etc), and deal with sound in the frequency dimension which generally seems more abstract.

--

The next two are “audio techniques”, because they deal with sound in the amplitude domain which is the way sound travels to our ears in the physical world.

Wavetable Synthesis: This technique makes use of small snippets of waveshapes in any imaginable configuration, repeating them at the right speed to make the desired pitch. You can set up these single-cycle or short-cycle waveshapes in a table together, and then scan through the table to find the one you want, or scan through it in real time as part of the sound you make, which is why it’s famous for rapidly morphing sounds. This would be called wave-scanning wavetable synthesis. Several popular synthesizers today work in this manner at least as an option, including both Xfer Records Serum and Arturia Pigments noted above. They also both have filters, so they can be used for subtractive synthesis too. Another feature of many wavetable synths is the ability to adjust the shape of the waveshapes on the fly with various kinds of distortion algorithms, and then you have a waveshaping, wavescanning wavetable synthesizer (say that 5 times fast!).
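A toy wave-scanning voice in Python (my own sketch, not how any particular product works): store a few single-cycle shapes in a table, repeat them at the note’s frequency, and cross-fade through the table over the duration of the note:

```python
import numpy as np

SAMPLE_RATE = 44100
TABLE_SIZE = 2048

# Three single-cycle waveshapes stored in one table.
cycle = np.linspace(0, 2 * np.pi, TABLE_SIZE, endpoint=False)
table = np.stack([np.sin(cycle),                  # slot 0: sine
                  2 * (cycle / (2 * np.pi)) - 1,  # slot 1: sawtooth
                  np.sign(np.sin(cycle))])        # slot 2: square

def wavescan(freq, seconds):
    """Repeat the stored cycles at `freq`, scanning from the first table
    slot to the last over the duration of the note."""
    n = int(SAMPLE_RATE * seconds)
    idx = (np.arange(n) * freq * TABLE_SIZE / SAMPLE_RATE).astype(int) % TABLE_SIZE
    pos = np.linspace(0, len(table) - 1, n)  # scan position through the table
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, len(table) - 1)
    frac = pos - lo
    return (1 - frac) * table[lo, idx] + frac * table[hi, idx]

morph = wavescan(110, 2.0)  # sine -> saw -> square over two seconds
```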

Synthesizers in the wild: Wavetable synthesis became really popular again a few years ago with a new generation of the technology, and many synthesizers support some version of it now. I first learned it from the wavescanning feature of Logic’s ES2 synthesizer, but today I’m more likely to use Xfer Records’ Serum or Arturia’s Pigments, both of which originally billed themselves first and foremost as wavetable synths. Massive also has a wavescanning feature. Many computer synthesizers that are good for subtractive synthesis are actually wavetable synthesizers; it’s just that one or more of their tables contain the standard subtractive waveforms (sawtooth, square, etc.), and they pretty much always have at least one and often two filters.

Sampler: As noted above in the synthesizer section of the software guide, this is a synthesizer that loads in longer audio files (more than just a few cycles at a time, usually several seconds per file) and assigns them to be triggered via MIDI. You will often have some ability to edit the audio files in the sampler, and you can usually stretch them in pitch and time, so you can take one pitched audio file and spread it across the whole keyboard. This can sound really cool when you play something several octaves lower than the original sound. You will also often have a variety of built-in audio effects. The most popular sampler currently is Kontakt from Native Instruments, part of the Komplete bundle. As a synthesis technique, sampling can mangle audio in various ways to make it sound less like the audio you started with; if you are just playing the audio back “unmangled”, I prefer to think of sampling as separate from synthesis.
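The pitch/time stretching idea is easy to sketch (in its crudest form, where pitch and length change together, the way early hardware samplers worked). This is my own illustration, with a synthetic stand-in for a real recording:

```python
import numpy as np

SAMPLE_RATE = 44100

def repitch(sample, semitones):
    """Sampler-style repitch by resampling: reading through the audio
    faster raises the pitch (and shortens it); slower lowers it. This is
    how one recording gets spread across the whole keyboard."""
    step = 2 ** (semitones / 12)  # equal-tempered speed ratio
    positions = np.arange(0, len(sample) - 1, step)
    return np.interp(positions, np.arange(len(sample)), sample)

# Stand-in 'recording' (one second of A440); a real sampler would load a
# WAV file here instead.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
recording = np.sin(2 * np.pi * 440 * t)
octave_down = repitch(recording, -12)  # twice as long, an octave lower
fifth_up = repitch(recording, 7)       # shorter and a fifth higher
```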

Synthesizers in the wild: As I’ve mentioned a couple of times in the guide, the most popular sampling software is Native Instruments’ Kontakt. Logic has a built-in sampler (formerly called EXS24, now simply Sampler), and several sample library companies have their own proprietary systems now for their products, which will usually come with their sample libraries. Most DAW-type programs that have included instruments will have at least one built-in sampler - Reason does, Cubase does, etc. Other synths may also offer at least a rudimentary version - again, Arturia’s Pigments does. Serum only sort of has a sample feature - in certain cases you can import a file and it will try to analyze it into a wavetable, with greater or lesser success depending on the file and what you want to do with it. Alchemy in Logic also has both analysis and “normal” sampling functions.

--

The final two techniques go together only in not being in the other categories.

Physical Modeling: This technique looks at the physics of how instruments work and tries in some way to recreate that in synthesis. This is one of my favorite methods for virtual instruments trying to recreate acoustic instruments, as it’s very expressive. The simplest version of this says that most instruments feature an exciter of some kind (breath being split in a flute, a buzzing reed or buzzing lips for other wind instruments, the “stick-slip” effect of the bow for string instruments or fingers displacing a string for plucked strings, etc.). This injects energy into the system, which is then dissipated in some way and filtered along the way by the body of the instrument (does this sound familiar? A harmonically rich starting point that gets filtered in some way?). The filtering mechanism is more complex than just a low-pass filter, and the starting sound isn’t really just a sawtooth wave, but subtractive synthesis can have similarities. A more complex version of physical modeling is component modeling, where physics equations are run for each element of the instrument interacting, and the resulting sound can be eerily like the real thing, especially in expression.
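The classic beginner’s example of this family is the Karplus-Strong plucked string algorithm. To be clear, this is a textbook toy model (not how the commercial products below work), but it shows the exciter-plus-losses idea in a few lines of Python:

```python
import numpy as np

SAMPLE_RATE = 44100

def karplus_strong(freq, seconds, damping=0.996):
    """Karplus-Strong plucked string: a burst of noise (the exciter)
    circulates in a delay line (the string), and an averaging filter
    dulls it a little on every pass (the losses)."""
    period = int(SAMPLE_RATE / freq)           # delay length sets the pitch
    string = np.random.uniform(-1, 1, period)  # the pluck: pure noise
    out = np.empty(int(SAMPLE_RATE * seconds))
    for i in range(len(out)):
        out[i] = string[i % period]
        string[i % period] = damping * 0.5 * (out[i] + string[(i + 1) % period])
    return out

pluck = karplus_strong(220, 2.0)  # decays and mellows like a plucked string
```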

Synthesizers in the wild: Many of the synths I mention above in the software part of the guide use some version of this technique. Some of my favorites at the moment are Modartt’s Pianoteq and Organteq, Audio Modeling’s instruments, and Applied Acoustic Systems’ collection, especially Chromaphone.
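
To make the exciter-plus-resonator idea above concrete, here’s a minimal Karplus-Strong plucked-string sketch in Python - the classic textbook physical model, not the internals of any of the commercial instruments I just named:

    import numpy as np

    SAMPLE_RATE = 44100

    def pluck(freq, seconds, damping=0.996):
        """Karplus-Strong: a noise burst (the exciter) circulating
        through a delay line with averaging (the resonator/filter)."""
        period = int(SAMPLE_RATE / freq)           # delay length sets the pitch
        delay = np.random.uniform(-1, 1, period)   # the "pluck" energy
        out = np.empty(int(seconds * SAMPLE_RATE))
        for i in range(len(out)):
            out[i] = delay[i % period]
            # Average adjacent samples: a gentle low-pass that mimics
            # the string losing high frequencies as it rings.
            nxt = delay[(i + 1) % period]
            delay[i % period] = damping * 0.5 * (out[i] + nxt)
        return out

    # note = pluck(110.0, 2.0)  # an A two octaves below A440

Everything about the timbre falls out of the physics-flavored setup: the noise burst is the pluck, the delay line is the string, and the averaging is the energy loss.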

Granular synthesis: As much an audio effect as a synthesis technique, this starts with an audio sample of some sort and chops it into snippets (“grains”) of a few milliseconds, which can then be reconfigured in a variety of ways (snippets played forward but triggered in reverse order, played in order but with each snippet reversed, every other snippet or every third one, etc.). You can adjust on the fly how it chops up the audio (how long the snippets are, what the amplitude envelope for each one is, etc.). You can make the resulting audio sound like an edgy version of the original, or like something completely unrecognizable.

Synthesizers in the wild: I haven’t done too much with this technique, but I have tried it a few times with Reason’s granular synth, Grain, and there are some interesting iPad apps I used a few years ago. At least one of the synths I do use a fair amount offers it as well - Arturia’s Pigments has a granular engine, though I don’t use that part of it much.
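
If you want to poke at the basic mechanics, here’s a bare-bones granular sketch in Python/NumPy (the grain length, triangular envelope and reverse-order reassembly are arbitrary illustrative choices):

    import numpy as np

    SAMPLE_RATE = 44100

    def granulate(samples, grain_ms=40, reverse_order=True):
        """Chop audio into short grains, envelope each one, and
        reassemble them - here in reverse order, as one example of
        the reconfigurations described above."""
        n = int(SAMPLE_RATE * grain_ms / 1000)
        grains = [samples[i:i + n] for i in range(0, len(samples) - n, n)]
        # A simple triangular amplitude envelope per grain avoids
        # clicks at the grain boundaries.
        env = 1.0 - np.abs(np.linspace(-1, 1, n))
        grains = [g * env for g in grains]
        if reverse_order:
            grains = grains[::-1]  # forward grains, triggered backwards
        return np.concatenate(grains)

    # e.g. granulate one second of a 440 Hz sine:
    # out = granulate(np.sin(2 * np.pi * 440 * np.arange(44100) / 44100))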

MIDI

This is a rather big topic, so I’ll try to cover the basics briefly, but there are entire books, video courses and even college classes I’ve taken on what MIDI is and how it works, so there’s a lot more where this came from. I mention a couple of other resources at the end of this section.

MIDI is an acronym that stands for Musical Instrument Digital Interface. It is a computer protocol that allows devices to exchange data using a compatible, open standard rather than any particular company’s proprietary interface. It allows data about a musical performance to be transmitted between devices, between hardware and software, and between performance controllers and synthesizers or other sound generators. It has also been adapted for use in other fields, such as lighting boards in performance venues. It was first released in 1983 and has been one of the most successful open standards of the past 40 years. It has led to the modern plethora of electronic musical instruments from a wide variety of manufacturers that we enjoy today.

The first thing to keep in mind is that MIDI has nothing to do with audio or sound. It is more like notation than actual sound, in that it represents instructions for making music rather than being the music itself (see Life Tips General Musicianship Tip No. 2). It has been extended and expanded over the years with various additional protocols to be more powerful. If you think about how much computers have changed since 1983, it’s really remarkable that this standard still exists as much as it does today.

MIDI 2.0 was released a few years ago at this point (writing in early 2024), and support for it is being rolled out slowly. But the main version of MIDI today is still based on an extended version of MIDI 1.0. It can now be transmitted over a USB cable rather than the original 5-pin DIN connector, though you will still find jacks for 5-pin cables on many devices today as well. In my studio, all my keyboards and wind controllers use USB.

The standard itself supports a variety of message types, of which the most common are Note On, Note Off, and Continuous Control (sometimes called Control Change, both abbreviated as CC). There are five other types as well: Program Change for preset selection, Pitch Bend, Channel Aftertouch, Poly Aftertouch, and System Exclusive (SysEx).

Message Types:

Note On and Note Off are used to indicate that a note has started and stopped sounding respectively (often that a key has been pressed or released, but not every MIDI instrument is a keyboard).

CC messages allow for relatively fine-grained control over a continuous parameter such as volume, vibrato depth or speed, or filter cutoff frequency. Some of them can also be used as switches - a traditional sustain pedal is a switch (the pedal is either up or down), though continuous sustain pedals are becoming more common these days for partial pedaling in classical piano and other styles. You have 128 possible MIDI CCs.

Program Change is used to select which preset you are currently using.

Pitch Bend allows you to continuously bend the pitch, like bending a guitar string, or “lipping” a pitch on a wind instrument to get it in tune or to slide into or out of a note. String glissando effects are also possible with pitch bend, though most of the time you will use it over a fairly narrow range - say, a whole step. Most synthesizers will allow you to select larger intervals, but they may be harder to control.

Aftertouch refers to adding pressure to a key after you are holding it down. The more pressure you apply, the higher the value from 0 (no extra pressure) to 127 (max readable pressure). Channel aftertouch will apply equally to every key being held down at once regardless of which key is being pressed harder, while Poly Aftertouch will read each key individually for aftertouch.

System Exclusive was included so that manufacturers could add features that weren’t available through the base specification. These days I don’t see it used that much, but some older instruments make use of it.

You will notice that I mentioned 128 values a couple of times there. This is a critical number in MIDI - 7 bits of computer data. If you know a bit about computer programming, you might be wondering why it’s not 8 bits, or 1 byte. Actually it is, but the first bit is a flag that tells the system whether the byte is a “Status Byte” (sometimes called a control byte) or a “Data Byte”, so there are only 7 bits of usable data. Status Bytes tell the system what the following Data Bytes are referring to. A Status Byte uses 3 bits to set which of the 8 message types is being sent, and the last 4 bits point to one of 16 available channels. The following Data Byte or Bytes use all 7 bits for data, allowing 128 possible values, usually expressed as 0 to 127.
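
Here’s that byte anatomy as a quick Python sketch (the 0x91 example value is my own; the bit layout comes from the MIDI 1.0 spec):

    # Anatomy of a MIDI status byte, e.g. 0x91 = Note On, channel 2.
    byte = 0x91

    is_status = bool(byte & 0x80)      # top bit set -> status byte
    message_type = (byte >> 4) & 0x07  # 3 bits: which of 8 message types
    channel = byte & 0x0F              # 4 bits: channels 0-15 (shown as 1-16)

    # 0b001 here means Note On; 0b000 would be Note Off, etc.
    print(is_status, bin(message_type), channel + 1)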

Note Messages (both On and Off) have two data bytes following - the first gives you the note number out of 128 possible notes. By convention, MIDI note 60 is middle C, and each number above or below that is respectively a half-step higher or lower (so 72 is the C an octave above middle C, 48 is the C an octave below, etc). If you start using other tuning systems this mapping can be modified depending on exactly which system you use, but it holds if you are using some version of the 12-note-per-octave system we generally use today.
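
As a quick worked example of that mapping (assuming the common convention that note 60 is written C4, and standard A440 equal temperament - neither of which is mandated by MIDI itself):

    NAMES = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

    def describe(note):
        """Map a MIDI note number to a name/octave and a frequency."""
        name = NAMES[note % 12]
        octave = note // 12 - 1  # convention where 60 = C4 (some DAWs say C3)
        freq = 440.0 * 2 ** ((note - 69) / 12)  # A440 equal temperament
        return f"{name}{octave} = {freq:.2f} Hz"

    print(describe(60))  # C4 = 261.63 Hz (middle C)
    print(describe(69))  # A4 = 440.00 Hz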

The second data byte of a Note On or Off message is for note velocity, and reflects MIDI’s origins in the world of keyboards. Velocity is the speed at which a key on a keyboard is depressed, which usually translates into how loud the sound is, by analogy with a piano. You can also think of this as how hard you hit the key, but it’s actually better to think of it as how fast you hit the key, as this will reduce tension in your playing - something discussed all the time in piano technique.

MIDI CC messages have two data bytes: the first specifies the CC number (0 to 127) and the second gives the new value it should take (also 0 to 127).

You can have up to 128 Program Changes per channel, and there will be only one data byte - the number of the new program to change to.

Pitch Bend uses two data bytes together for 14 bits of resolution (16,384 steps), giving you much finer control over the precise pitch than only 128 possible values would.

Channel Aftertouch only sends one data byte with the new value of the aftertouch setting. Poly Aftertouch sends two bytes, one for the note in question and one for the new value.

System Exclusive messages vary based on what the manufacturer is using them for.
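
Putting the last few paragraphs together, here’s a sketch of what some of those messages look like as raw bytes (the channel and values are arbitrary examples; the status byte values come from the MIDI 1.0 spec):

    def note_on(channel, note, velocity):
        # Status 0x90 + channel, then two data bytes: note, velocity.
        return bytes([0x90 | channel, note, velocity])

    def control_change(channel, cc, value):
        # Status 0xB0 + channel, then CC number and new value.
        return bytes([0xB0 | channel, cc, value])

    def program_change(channel, program):
        # Status 0xC0 + channel, then a single data byte.
        return bytes([0xC0 | channel, program])

    def pitch_bend(channel, amount):
        # amount is 0..16383 (8192 = no bend): two 7-bit data bytes
        # combined for 14 bits of resolution, least significant first.
        return bytes([0xE0 | channel, amount & 0x7F, (amount >> 7) & 0x7F])

    msg = note_on(0, 60, 100)  # middle C, channel 1, velocity 100
    print(msg.hex())           # -> "903c64"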

How to use MIDI

So how does all this affect a practicing musician, as opposed to someone making the instruments? The biggest thing is decoupling the playing mechanism from the sound source. Most acoustic instruments come with their own playing mechanism, and you can’t readily swap in a completely different one. Usually you can’t decide to blow into a violin, or to pluck or bow a clarinet. You can theoretically swap the action in a keyboard instrument, but in practice that’s not something people usually do. With electronic instruments using MIDI, though, you can use different performance controllers for the same synthesizer, and even better, you can play several synthesizers with the same control mechanism. I have my primary controller keyboard, and that’s how I play most of my sounds in my studio setup, from a variety of sound generators. But I also have small portable keyboards I can use with my setup, and a wind controller, and even my iPad can be set up to generate MIDI data from apps that could be used to control any given synth. Some of my tracks have featured the wind controller being used on electronic sounds in synths like Serum, giving a different kind of motion to my sounds compared to the usual velocity, modwheel or envelope / LFO (see Life Tips Composition Tip No. 23 for more about motion in synth patches).

This type of system is also what allows one person to make large, multi-instrument tracks like orchestral mockups or electronic soundscapes. You don’t need a large number of people to get a sound like a large number of people, and with another nice trick of MIDI, you can pretend you play all the instruments even if you have no idea how to make a sound on an actual horn or trumpet, for example. The trick I’m referring to is the fact that MIDI is a control protocol, so you can slow the tempo down, record your parts at half speed, and then play them back at full speed with no loss in quality - unlike what would happen if you sped up an audio recording of actual acoustic instruments. The one thing you will need to be careful of is that you phrase things differently at a slow tempo than at a fast one - most of the time you will hold a note for more of its full length when playing slowly, so if you fail to account for that when recording, the full-tempo version will sound overly legato. You need to play it like it’s fast, but slowly, and this is harder than you might think. It’s still easier than getting that good at the actual acoustic instrument, though.

One more thing to keep in mind is that you can make MIDI instruments do things that are impossible on the acoustic instruments, so you need to be careful about learning to write for acoustic instruments using samples or synthesizers (see Life Tips Composition Tips No. 1 and 2 for more on that). Notation software also uses MIDI to communicate with playback software like NotePerformer or various sample libraries you might use, and notation software in particular can play tricks on you. But even in a sequencer, and even with a wind controller (for wind parts), there are still traps you’ll fall into if you are not careful. One easy one is mixing poorly - it’s really easy to write in a way that won’t work in a live ensemble and then crank a track in the mix to fix it, an option that often isn’t available in a concert environment. You can mic instruments in a concert hall, but you should try to write in a way that doesn’t need miking for balance. If you are writing straight to a recording without going through live players, then you can write anything that sounds good on the computer, but even there, things will often sound better if you write in a way that could work in a hall, assuming you are writing for what would be an acoustic ensemble.

Finally, one other thing that often trips up students when they first get into computer studios and MIDI is that MIDI is not audio, and vice versa. You can’t directly convert a MIDI track to an audio track - you have to record the output of a synthesizer or virtual instrument that is receiving the MIDI data. You can have different synths interpret the same data, and it will sound very different. Some sequencers have an option that looks like converting MIDI data to audio (often called “bouncing” - Logic Pro has an option to “Bounce in Place”, for example), but under the hood it is recording the audio output of the synthesizer to an audio track. There are programs now that try to scan an audio track and transcribe it automatically to MIDI, and sometimes they work well and sometimes they don’t. They generally work better on clean, monophonic audio with only one instrument playing - chordal instruments like guitar and piano often give them trouble, and I’m not aware of any program that can transcribe an orchestral recording to a score automatically. Most people can’t do that very well either, though I know a couple of people who claim to be able to. I can do it to some degree, depending on how complicated the score is and how good the recording is, but it’s extremely difficult in general and well outside the expected level of any college musicianship or ear training class.

MIDI is very useful in the music studio, and it is a good idea to become extremely familiar with how it works. Several books I’ve used have helped me with that, including chapters in both “Understanding Audio” by Daniel M. Thompson, and “Sound Synthesis and Sampling” by Martin Russ, referenced above (in the Hardware Training and Software Training sections, respectively). I also like the video series “MIDI Demystified” from MacProVideo. After that, it’s a matter of playing with it a lot and practicing - treat your studio like it’s a real musical instrument you have to practice like any other, and it will greatly reward you!