Using Alfred to send quick email messages

October 28th, 2013

I’ve become a huge Alfred fan; it’s a real time saver. The Powerpack is a worthwhile upgrade, as it lets you build all kinds of quick workflows.

One workflow I use a lot lets me send a quick, out-of-band email message to someone. It’s incredibly convenient to just hit Option-Space and type into Alfred

jim please call Mary in an hour

and have that message be sent as an email to Jim.

Here are the pieces you need to make this work. I assume you have Alfred and the Powerpack and know how to create workflows; obviously, customize the names for your own use. You will also need Python installed. I use Python 3, which you can download from here. Don’t forget to install it after downloading!

1) Download the zipped Python script quickmessage.py, then uncompress it and save it in a folder on your system. (NB: I found the original version of this script on the net and modified/cleaned it up to suit my purposes.)
1a) Edit that script to define your own mail hosting server, username, password and so on: the same information you have specified in your regular email program.
1b) From a terminal window, go to the directory where you installed quickmessage.py and run chmod 755 quickmessage.py so that the script can be executed directly.
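
For reference, here is a minimal sketch of what a script like quickmessage.py might look like. This is not the actual download: the -r option matches what step 2b describes, but the -s subject flag and all the server settings are placeholders of my own.

    #!/usr/bin/env python3
    # Minimal sketch only -- not the actual quickmessage.py download.
    # All server/account values are placeholders: use the same details
    # you have in your regular email program.
    import argparse
    import smtplib
    from email.mime.text import MIMEText

    SMTP_HOST = "mail.example.com"   # placeholder
    SMTP_PORT = 465                  # placeholder (SSL port)
    USERNAME = "you@example.com"     # placeholder
    PASSWORD = "secret"              # placeholder

    def main():
        parser = argparse.ArgumentParser(description="Send a quick email message")
        parser.add_argument("-r", dest="recipient", required=True,
                            help="email address of the recipient")
        parser.add_argument("-s", dest="subject", default="Quick message",
                            help="subject line (this flag is my own invention)")
        parser.add_argument("body", help="the message text")
        args = parser.parse_args()

        msg = MIMEText(args.body)
        msg["Subject"] = args.subject
        msg["From"] = USERNAME
        msg["To"] = args.recipient

        with smtplib.SMTP_SSL(SMTP_HOST, SMTP_PORT) as server:
            server.login(USERNAME, PASSWORD)
            server.send_message(msg)

    if __name__ == "__main__":
        main()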

2) Add a new workflow called Email jim with the following items in it. The contents of each block are described below.

2a) A keyword input: jim

2b) A Run Script action using /bin/bash:
  • Put the full path to the quickmessage.py script first; that’s the path where you saved the Python script.
  • Replace the parameter to the -r option with the email address of the recipient.
  • Optionally change the content of the subject line.
A sample command line is shown after step 2c.

2c) Post notification — this provides confirmation that your message was sent
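
For example, something like this (the path and address are placeholders matching the sketch above; {query} is Alfred’s placeholder for whatever you typed after the keyword):

    /Users/you/scripts/quickmessage.py -r jim@example.com -s "Quick message" "{query}"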

3) Off you go!

Categories: Recommended software, Tips

Cleaning up your view of FB

September 16th, 2012

A few days ago, I posted a reference to a plugin that reduces the clutter and hides stuff you don’t want to see in Facebook. I found out later that my posting had somehow disappeared. Ignoring for the moment how that might have happened (the answer seems rather disconcerting, since I know I didn’t remove it), I figured I’d repost the information someplace other than on my FB profile.

Go to http://www.fbpurity.com/install.htm, where you will find the plugin for various browsers. I have only been playing with it for a few days, but it does seem to do a very good job of letting you manage what you want to see and, more importantly, what you don’t want to see.

Update: There’s another plugin called Social Fixer (http://socialfixer.com/) that is also worth considering.

Categories: Recommended software

My keyboard rig for the Security Project

August 16th, 2012

The Security Project was created early in 2012 to perform the early music of Peter Gabriel. A key feature of this project was the inclusion of musicians who performed or were otherwise involved with Peter Gabriel the first time around, about 30 years ago. I was invited to join this project a couple of months later and we just performed our first show at B.B. King in NYC on August 11th.

Update (Feb 12th, 2013): The Security Project just completed a short tour in the Northeast in four states, ending with a second performance at B.B. King in NYC.

Update (April 16th, 2014): The Security Project just completed another tour that included about 12 shows in the Northeast and a couple of shows in Canada (Montreal and Quebec City). A highlights video from that tour can be found here.

A number of people have asked me to describe the keyboard environment I am using for this project. Given that much more is going on than can be seen from a distance, I figured it was time to write it up.

Keyboards

I am using four physical keyboards, set up in an L shape. An important point to understand is that there is no correspondence between any particular keyboard and any particular sound that is heard.

On my right is an Akai MPK88 weighted MIDI controller underneath a Yamaha AN1x. The Akai is nominally used for piano parts but does get used for other parts where necessary. For example, in Family/Fishing Net, I control the low blown-bottle sound that appears at the very beginning, the flute loop that also appears at the beginning, and then later the piccolo. I also play bass on it at the points where Trey plays solos. The AN1x, although nominally a full synth, is used only as a MIDI controller; its internal audio is not connected. The sliders on the Akai and the knobs on the AN1x are used to control volume and/or other real-time effects as needed.

On my left is a single-manual Hammond XK3-C underneath a Korg Kronos. The Hammond is often used solely as a MIDI controller; only occasionally is its internal sound engine used, for example in the early part of Fly On A Windshield and in Back in NYC. The Korg Kronos is mostly used as a synth engine, and the sounds it produces are often played from some of the other controllers. Occasionally I play the Kronos keyboard itself, but as often as not, in that mode, the sounds that are heard are actually coming from somewhere else. (I’ll get to “somewhere else” in a moment.)

Update (December 15th, 2012): The Hammond XK3-C has now been replaced by a Nord C2D (a dual-manual organ), which provides much more flexibility for organ playing and effectively gives me two MIDI controllers, which is very useful.

Update (July 7th, 2014): While I am still using five keyboards, the Kronos 61 and Nord C2D have been replaced by three Roland A800 Pro controllers. The bottom two are (by default) routed to the GSi VB3 Hammond emulator plugin on my laptop. The Yamaha AN1x has been replaced by a fourth Roland A800 Pro, and the Akai weighted controller has been replaced by a Kronos X 88, which now takes on both the role of synth engine where needed and that of weighted controller.

Pedals

On the floor, under each pair of keyboards, is a Roland FC300 MIDI pedal controller. In addition to five foot switches, each has two built-in expression pedals, and several extra expression pedals are also plugged into the unit. These pedals perform different operations depending on the song. For example, in Rhythm Of The Heat, foot switches turn on or off the background single-string note that plays throughout much of the song, as well as the deep percussive Pitztwang note that comes in at the end of many vocal phrases. In Humdrum, the same foot switches are used to emulate the deep Taurus bass notes heard in the last section.

Eigenharp

The Eigenharp is a highly sensitive controller that transmits high-speed OSC data as keys are played. The keys detect motion in three dimensions and are extremely sensitive, allowing guitar- or violin-style vibrato to be played easily. In San Jacinto, I control the volume of the three marimba loops. I also play the string orchestra part in the middle (the actual sound comes from the Korg Kronos) and then the Moog bass and steampipe sounds at the end. Those last two are produced by two soft synths running on my laptop; more on this later.

iPads and iPhone

An iPhone and one of my iPads run Lemur, an app that implements a touch-sensitive programmable control surface (with sliders, buttons, knobs and so forth). These are used for real-time control of various sounds. The iPhone was used to start the whole show from the back of the room, where the touch of a button triggered the Rhythm Of The Heat loop that is present through most of the song. It was also used to generate the same Pitztwang sound when I was not behind the keyboards and able to reach the pedal.

A second iPad runs Scorecerer, a product developed by my company that displays sheet music and annotations as well as sending commands back to the computer to change all settings as we move from one song to the next in the set list.

Update (November 2014): A third iPad has now been added whose sole purpose is to display a small portion of the laptop display, the part showing current knob assignments. The laptop itself is no longer raised up and so its screen is less visible.

Rack

On the floor between the two sets of keyboards (and under the computer stand) is a rack containing two MOTU 828 MkIII audio interfaces, a MOTU MIDI Express XT (an 8-port MIDI interface), the base station for the Eigenharp, an Ethernet/WiFi router and a power supply for the entire system. Power, audio, MIDI and USB connections (as required) from keyboard controllers and pedals are connected directly into this rack. The MOTU 828s allow me to route each sound generator (e.g., VST instruments, VST effects, Max audio, external audio from the organ and Kronos) to independent audio output channels that go to FOH. The rack is also connected to an audio receiver that returns a feed of all the instruments (except my keyboards) from the monitor mix. That feed is then mixed back in with the keyboards so that I can control how much of my rig I hear relative to the rest of the band. The router allows the iPhone and iPads to communicate with the computer. The reason for so many audio outputs is that different kinds of sounds can be EQ’d separately and the volume of sequences can be controlled relative to other keyboard sounds for monitoring by other band members.

Update (Feb 1st, 2013): The latest version of the CueMix software that controls the MOTU 828s now responds to OSC for remote control. Consequently, I have added a third iPad (an iPad Mini) that runs TouchOSC with templates for the MOTU hardware. That allows me to easily adjust the volume of the band mix I’m receiving without having to interact directly with the computer. Ultimately, I’ll take some time to reverse-engineer the actual OSC data being sent, after which I will be able to use MaxMSP (see next section) to send the same data, thereby allowing me to control that volume from a slider on one of my keyboards rather than from a third iPad.

Update (Jan 12th, 2014): The two MOTU 828s have been replaced with two RME UCX audio interfaces. The benefits are improved sound quality, reduced latency and better driver performance. The CueMix software has been replaced by RME’s TotalMix, which also has OSC support and an iPad app, so everything else remains unchanged.

Computer

The entire system is controlled by a MacBook Pro running custom software I developed using the MaxMSP programming environment. A description of the custom software can be found in other blog entries on this site, and the extensions I developed to integrate the Eigenharp into MaxMSP can be found here.

Everything that happens is processed through this software environment. When a song request is received (from the iPad running Scorecerer), MaxMSP loads all the required soft synths, sets up the appropriate MIDI routings (i.e., which parts of which keyboards play which sounds), routes the audio to the appropriate outputs on the MOTU device and responds to real-time controllers (knobs, sliders, buttons and pedals, as well as the Eigenharp) to control volume and other parameters (e.g., filter cutoff, attack, decay, reverb or whatever is needed) for the specific song. MaxMSP is also responsible for generating the real-time loops, both MIDI and audio, and in some cases it also adds extra harmonies depending on what I’m playing.
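
To give a feel for the kind of per-song mapping involved, here is a conceptual illustration expressed as Python data. This is not the actual Max patch; every name below is hypothetical.

    # Illustration only: roughly the kind of per-song mapping involved.
    SONG_CONFIG = {
        "soft_synths": ["Kontakt", "Minimonsta"],
        # which parts of which keyboards play which sounds
        "midi_routings": {
            ("akai_mpk88", "lower_half"): "kontakt_piano",
            ("an1x", "full_range"): "minimonsta_bass",
        },
        # which audio interface output each sound generator feeds
        "audio_outputs": {"kontakt_piano": 1, "minimonsta_bass": 3},
        # real-time controller assignments for this song
        "controllers": {"an1x_knob_1": ("minimonsta_bass", "filter_cutoff")},
    }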

Here is an example of some of the configuration created in MaxMSP for San Jacinto.

Soft synths and effects

While some of the sounds heard are coming from the Korg Kronos and Hammond organ (even if they’re not being played from their respective keyboards), a variety of audio plugins are in use. The most important of these are Native Instruments Kontakt and Reaktor, the GForce Oddity and Minimonsta (yep, Arp Odyssey and Minimoog emulators respectively), and the AAS UltraAnalog. Effect processing is mostly done with Native Instruments GuitarRig and IK Multimedia AmpliTube. However, some effects are done directly with MaxMSP. I used to use Arturia plugins (I love their Moog Modular and Arp 2600) but I had too much grief with their copy-protection scheme and so had to drop them.

Credits

Jerry, Trey, Michael and Brian are amazing!

Larry Fast provided me with samples and loops for some of the songs, and his guidance and insights into how some sounds were originally created and performed were absolutely critical to recreating the experience.

Jim Kleban provided me with great information as well as RMI and ARP Prosoloist samples for the Genesis songs we performed from the Lamb album.

I’d also like to thank the support team at Cycling74 as well as many users on the Cycling74 support forums. MaxMSP was a core component in making this project workable.

Norman Bedford gets a prize for organizational expertise.

Finally, my heartfelt thanks to Scott Weinberger for inviting me into this project and giving me the opportunity to achieve one of my lifelong dreams.

Conclusion

I hope this information is of interest. Please feel free to submit comments and questions; I’ll do my best to respond as time permits.

Using Max with the Eigenharp Alpha

April 26th, 2012

A new website has been created that focuses specifically on using Max with the Eigenharp. Please visit http://max4eigenharp.com for more details as well as for the Max patchers to support the Eigenharp.

Replace Apple MainStage with a Custom Max/MSP Implementation (Part 4)

February 2nd, 2012

Well, it has been a few months since I wrote the previous articles on my development of a custom alternative to Apple MainStage. After using it for a while, I started to figure out what else I really needed, and as of today I am comfortable that I have created an update worthy of being called version 2 (or I suppose I should call it version 2012!).

It’s going to take a while to finish this article, and I’ve decided to make it visible along the way; otherwise, I can’t say how long it will be before I finish it. Of course, having it visible before it’s finished will most likely act as a catalyst to get it finished ASAP.

The key new features of this version are:

  • Persistence
  • Audio mixer
  • External audio routing
  • Better encapsulation

1) Persistence

The one feature that MainStage (and other similar systems) had that was missing in my library was the ability to remember the state of parameters, so that when a song was next loaded, any parameters changed in real time while the song was previously loaded would be automatically recalled.

2) Audio mixer

I use a MOTU 828 MkIII + MOTU 8Pre, and up until now I was just using MOTU’s CueMix application to do basic volume control. However, CueMix has two flaws: a) no remote control over the mixer and b) no way to process audio through software (e.g., VST effects).

I am now explicitly routing all audio through Max objects. As you will see, there have been some nice benefits to this design. Further, I have not observed any latency issues, something that was of concern to me. The mixer has channel strips with the ability to send signals to VST effects.

3) External audio routing

The new mixer supports my external audio devices (keyboards, synths, guitars, vocals) and those can also be processed with VST effects. This will in fact allow me to get rid of quite a few external stomp pedals.

4) Better encapsulation

Among other things, I have taken the [GenericVST] object and wrapped it inside various new objects that automatically route audio to the desired mixing channel or effects send bus. It is also much easier to configure parameters of a VST and to initiate a VST edit/save cycle.

Consolidated front panel

Here’s a picture of the latest version of my consolidated front panel. You will note that it has changed significantly from the original version I described when I first started this effort. (Click the image to see a larger view)

Apart from all the extra elements (compared to my original version), the key change is that the bpatcher-based panels representing the control surfaces for some keyboards (Akai and AN1x), as well as the panel representing the mixer (external, VST and effects), are built from more primitive objects; at the bottom of that hierarchy sit UI objects whose state can be stored and reloaded centrally.

Persistence

While Max comes with several mechanisms for managing persistence (save/restore state of a coll, the pattrstorage system, the new dict objects in Max 6), the first was too simplistic, the second was too complicated, and the third was not available for Max 5.1.9 and so was not evaluated. However, my persistence needs were quite specific. If you turn a dial or change a slider position, directly or via MIDI from a control surface, I want the current value of that dial or slider to be associated with the currently loaded song, so that when the song is reloaded later, the value is restored and sent out to whatever parameter is being controlled.

It turns out that the easiest way to do this was to use a [js] Max object. The [js] object lets you write JavaScript code that lives inside Max, complete with input and output ports and a way to respond to arbitrary incoming Max messages. JavaScript trivially supports associative arrays, so all I needed was a scheme to uniquely name every slider and dial, no matter where it lives. To see why this is the case, I need to explain how the panels and mixer controls are actually implemented. Let’s examine a simple case first: the 8 dials that make up the AN1x control surface. First, here’s that section of the consolidated front panel again.

This is actually being displayed through a [bpatcher] object, as indicated by the light blue line around it. (That line is only visible when the patcher is in edit presentation mode.) If I drill in (i.e., open up this bpatcher), then one of the items is just the 8 dials.

Similarly, drilling into this bpatcher, we can see that there are in fact 8 separate bpatchers as well as a comment field to which we will return later.

Drilling into any one of these, we find the underlying dial and integer in an object called pdial (persistent dial). This is where the fun starts:

There are also a pslider (persistent slider) object and a plabel (persistent label) object, both implemented in the same manner. Note that at its core, the dial can receive values addressed to the name #2_pdial, and it can send its value out as a packed name/value pair to something called #1.

Now, #1 and #2 represent the values of two arguments that are passed into the object when it is instantiated. The first argument (whose value will be the same for all objects used in a given environment) will be the name associated with a [receive] object that will pass all incoming values into a dictionary of some kind. We will see what that dictionary looks like later.

Creating unique names for user interface objects

The second argument is much more interesting. We need a mechanism that produces unique names for different dials or sliders, each of which should have little or no knowledge of what else exists. To cut a long story very short, the solution I chose was to define, at every level, a name consisting of whatever is passed in from the parent level concatenated with whatever makes sense for the object in question. This is easier to demonstrate than to describe.

The second dial in the AN1x console receives messages addressed to a receiver with the name

AN1x1_AN1xConsole_2_pdial

Similarly, the 4th channel strip of the MOTU mixer receives messages addressed to

MainMixer_MOTUKeyboardsMixer_4_ExternalChannelStrip_ChannelStripVol_pslider

and the lower SEND dial for the 5th channel strip is addressed by

MainMixer_MOTUKeyboardsMixer_5_ExternalChannelStrip_ChannelStripEFX2_EffectsSend_pdial

If you parse that last one, reading right to left, it says that there is a persistent dial in an EffectsSend object, which represents the second EFX of a channel strip living in an ExternalChannelStrip; that is the 5th channel strip contained in the MOTUKeyboardsMixer, which lives (along with other things) in the MainMixer patch.

The nice thing about this is that the only objects that have to know this full name are the bottom level persistent dials, sliders and labels themselves.
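
In code terms, each level just concatenates the name its parent passed in with its own local name. A quick sketch of the idea (in Python, purely illustrative):

    # Each level appends its own piece to the name passed down by its parent.
    def child_name(parent, local):
        return parent + "_" + local

    mixer = child_name("MainMixer", "MOTUKeyboardsMixer")
    strip5 = child_name(mixer, "5_ExternalChannelStrip")
    efx2 = child_name(strip5, "ChannelStripEFX2")
    dial = child_name(child_name(efx2, "EffectsSend"), "pdial")
    # dial is now the full unique name quoted above:
    # MainMixer_MOTUKeyboardsMixer_5_ExternalChannelStrip_ChannelStripEFX2_EffectsSend_pdial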

Connecting with the dictionary

A Dictionary object is inserted into every top-level patcher that represents a song. (Remember that each such patcher contains the MIDI routings and VSTs needed to play a particular song.) Here’s a view of a Dictionary as contained in a song patcher.

The single parameter is the name that will be used by all persistent objects when they send their values. It can be anything you want. In my system, this name is DHJConsole.

Here’s the contents of the dictionary patcher. (Click to open full view in a separate window)

The core that makes this work is the little JavaScript object near the bottom (called NVPair) that is connected to a [forward] object. NVPair implements an associative array with functionality to insert name/value pairs, retrieve a value given a name, save or restore the array to a file, and a dumpall function that sends out all name/value pairs.
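
The real NVPair is JavaScript running inside the [js] object and isn’t reproduced here, but a rough sketch of the same interface in Python (names and details mine) looks like this:

    # Rough Python sketch of the NVPair interface described above.
    # The actual object is JavaScript inside a Max [js] object; this
    # only illustrates the logic.
    import json

    class NVPairStore:
        def __init__(self):
            self.pairs = {}                  # the associative array

        def insert(self, name, value):
            self.pairs[name] = value         # add, or update an existing name

        def retrieve(self, name):
            return self.pairs.get(name)

        def save(self, path):
            with open(path, "w") as f:       # persist every pair to a file
                json.dump(self.pairs, f)

        def restore(self, path):
            with open(path) as f:            # reload every pair from a file
                self.pairs = json.load(f)

        def dumpall(self, send):
            for name, value in self.pairs.items():
                send(name, value)            # emit every name/value pair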

Let’s go through the steps that occur when you turn a pdial (persistent dial) named AN1x1_AN1xConsole_2_pdial to the value 6:

  1. The pdial sends out a list containing two values (AN1x1_AN1xConsole_2_pdial 6) through a [send DHJConsole] object
  2. The dictionary receives this list through the [Receive #1] object (half-way down on the left-hand side). Remember that #1 will have been replaced with DHJConsole when the dictionary is actually instantiated.
  3. The text insert is prepended to the list and so the message
    insert AN1x1_AN1xConsole_2_pdial 6
    is sent into the NVPair.js object.
  4. The NVPair object creates an entry called AN1x1_AN1xConsole_2_pdial in an associative array and sets its value to 6. This is known as a name/value pair. Note that if there was already such a name in the associative array, the previous value would just be updated to the new value, 6.

Steps like these occur whenever any persistent object is modified. The NVPair object also has functions that can save and reload all names and values.
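
In terms of the Python sketch from above, the same sequence (with a hypothetical file name) would look like:

    store = NVPairStore()
    store.insert("AN1x1_AN1xConsole_2_pdial", 6)  # steps 1-4 above
    store.insert("AN1x1_AN1xConsole_2_pdial", 9)  # same name: value is updated
    store.save("SongName.json")                   # file name is hypothetical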

When a patcher containing a dictionary is closed, the contents of the associative array are saved to a file whose name is the same as the patcher (but with a different extension). When a patcher containing a dictionary is opened, the contents are reloaded into the associative array and then the following steps occur automatically after the patcher has finished loading any VSTs (those are the only objects that can take quite a few seconds to load).

  1. The dumpall message is sent to the NVPair
  2. This message causes the NVPair to iterate through the array, sending out every entry as a name/value pair. This is where the fun starts.
  3. For each entry, the value will be sent out first and it gets stored in a temporary variable (the [pv value] object)
  4. The [forward] object is a variant of the [send] object that allows the name to be changed. So the name that will be used is the name that comes out of the NVPair.
  5. Therefore, the value will be sent to any receiver whose name is the same as the name that came from the NVPair.
  6. Each name will exist in one single [receive] object corresponding to the persistent object that was created, as described in the “Creating unique names for user interface objects” section above. That is how each individual user interface element is updated.
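
Continuing the Python sketch, the reload-and-dispatch sequence might look like this, where receivers stands in for the per-control [receive] objects and the lookup-then-call models [pv value] plus [forward]:

    # 'receivers' stands in for the per-control [receive] objects.
    receivers = {}   # full unique name -> callback that updates one UI object

    def register(name, update_ui):
        receivers[name] = update_ui

    def dispatch_all(store):
        def send(name, value):
            if name in receivers:            # the [forward] destination
                receivers[name](value)       # deliver the stored value
        store.dumpall(send)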

In upcoming articles, we will talk about the audio mixer, routing and encapsulation.

Categories: Music Technology