We’ve been working on a realtime P/L update service which pushes row updates out to grids displayed on client desktops.  It has been live at a number of clients for a wee while now, with more coming on board as the upgrade cycles roll along.  The core of the problem boils down to this toy:

  1. On a client desktop, a grid with one row showing price-derived calculations for one ticker.
  2. New prices for the same one ticker arrive at a rate of P per second.
  3. The server sends out new rows at a rate of R per second.
  4. The client (receiving the R rows per second) updates the grid row G times per second.

Of course a real client grid wouldn’t update one row this frequently, so take the toy with a grain of salt.

Suppose the server can update and send a row quicker than the time between price updates, and suppose the client can receive a row and update its grid quicker than the time between arriving rows.  Then, if we let the server work like a rabid hamster, R = P and G = P.  I.e. if 1000 prices arrived within one second (and none after), we’d send out 1000 rows in that second, and the grid would be updated 1000 times in that second.  Perhaps server and client CPU each run at 50% during this time, and the data latency (the time from a price arriving at the server until it is displayed on the grid) would be pretty small (2/P + network/etc. latency).  At the end of that second, the CPUs would be idle.

Suppose, however, that other work on the server leaves only 25% of the CPU available (instead of the 50% we want).  If we keep to the same policy of sending a new row for every price that arrives, it now takes us 2 seconds to send all 1000 messages (overall CPU work = “% x time” being constant), meaning that by the end of it the last price is 1 second old (latency = 1s) by the time it ends up in the client grid.
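
To make the arithmetic concrete, here is the same back-of-envelope calculation as a snippet (the numbers are the toy’s, not measurements from the real service):

    using System;

    class SuspensionArithmetic
    {
        static void Main()
        {
            // The toy's numbers: a 1-second burst of 1000 prices, with only
            // half the CPU that the 'send every price' policy needs.
            double burstSeconds = 1.0;
            double neededCpu    = 0.50;  // fraction needed to keep R = P
            double availableCpu = 0.25;  // fraction actually available

            double slowdown  = neededCpu / availableCpu;          // 2x slower
            double drainTime = burstSeconds * slowdown;           // 2s to send all rows
            double lastPriceLatency = drainTime - burstSeconds;   // last price waits 1s

            Console.WriteLine("Drain time: {0}s, last-price latency: {1}s",
                              drainTime, lastPriceLatency);
        }
    }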

Of course, in our toy it was silly to try to process all 1000 messages, as they are all for the same ticker and row.  We could have processed only 10 of them (every 100th, discarding the rest), sent only 10 rows during that second, and updated the grid 10 times; the user would probably be perfectly fine with that, and we would have used 1/100th of the CPU.

Suppose we implement a policy where we process at most 10 prices per second and discard the older prices.  This means that the actual incoming price rate could vary from 10 up to 1000 prices per second (or beyond) and we wouldn’t even care.  Instead of having CPU go up and down at the whim of the incoming price rate, we have restricted CPU to a more comfortable, steady ride.  Effectively we have added a ‘mechanical suspension’ to our execution.

  • Mechanical Suspension: decouple output and resource consumption from variable input rates.
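
Here is a minimal sketch of one way to implement such a suspension; the Conflator name and the timer-driven approach are mine for illustration (as noted at the end of this post, the real service allocates threads to work differently):

    using System;
    using System.Threading;

    // Keep only the latest value and process it at a bounded rate,
    // no matter how fast new values arrive.
    class Conflator<T>
    {
        private readonly object _gate = new object();
        private readonly Action<T> _process;
        private readonly Timer _timer;   // held so the timer isn't garbage collected
        private T _latest;
        private bool _dirty;

        public Conflator(Action<T> process, int maxPerSecond)
        {
            _process = process;
            _timer = new Timer(Tick, null, 0, 1000 / maxPerSecond);
        }

        // Called at the (possibly very high) input rate: just overwrite.
        public void Post(T value)
        {
            lock (_gate) { _latest = value; _dirty = true; }
        }

        // Called at the bounded output rate: process the latest value, if any.
        private void Tick(object state)
        {
            T value;
            lock (_gate)
            {
                if (!_dirty) return;
                value = _latest;
                _dirty = false;
            }
            _process(value);  // e.g. recalculate and send the row
        }
    }

Whatever the incoming rate, the processing delegate runs at most maxPerSecond times per second and always sees the freshest price.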

Effectively this means discarding a lot of obsolete prices: each new price overwrites the previous one, and only the latest is ever processed.  In our toy example we can go further and add suspension to the row-sending from the server (e.g. no more than 5 row updates broadcast per second), then add suspension to the messages arriving at the client, and again to the writes to the grid.

So, if “||” means suspension we have:

  • Prices || Server Price Execution || Server Row Sends || Client Row Msgs || Grid Updates

So fluctuations in the rates of things on the left have a dampened effect on the rates (and CPU/resource consumption) of things on the right, ensuring a smoother ride.
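
Reusing the hypothetical Conflator from the sketch above, the wiring might look like this; the grid and the server-to-client transport are stubbed out:

    using System;
    using System.Threading;

    class PipelineSketch
    {
        static void Main()
        {
            // Each stage is insulated from the rate of the stage to its left.
            var gridUpdates = new Conflator<string>(row => Console.WriteLine("grid <- " + row), 5);
            var rowSends    = new Conflator<string>(row => gridUpdates.Post(row), 5);
            var priceExec   = new Conflator<double>(p => rowSends.Post("row @ " + p), 10);

            // Simulate a burst of 1000 prices arriving almost at once; most are
            // conflated away, and each stage drains at its own bounded rate.
            for (int i = 0; i < 1000; i++) priceExec.Post(100.0 + i * 0.01);
            Thread.Sleep(2000);
        }
    }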

The two critical measures to monitor in this kind of push scenario are CPU (or whatever the bottleneck resources are) and latency – how long a price change takes to get through the system.  A liberal use of performance counters throughout will turn the black box inside out.  Take advantage of points of configurability to allow the CPU vs. latency tradeoff to be tuned on a case by case basis.  The suspension also means the system is robust under periods of low CPU availability: the end result is only higher latency for the duration of the resource shortage, not ever-growing backlogged queues with lots of work to do when the CPU comes back to life.
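
As an illustration of the latency half of that monitoring (the stamping approach here is a sketch and assumes server and client clocks are reasonably synchronized; the real service’s counters may differ), each price can carry its arrival time through the pipeline and the client can record its age at grid-update time:

    using System;

    // Carries the arrival timestamp through the pipeline so the client
    // can measure data latency at the moment the row hits the grid.
    class StampedPrice
    {
        public readonly double Price;
        public readonly DateTime ArrivedUtc;

        public StampedPrice(double price)
        {
            Price = price;
            ArrivedUtc = DateTime.UtcNow;  // stamped when the price hits the server
        }

        // Arrival-at-server to display-on-grid, per the definition above.
        public TimeSpan AgeNow()
        {
            return DateTime.UtcNow - ArrivedUtc;
        }
    }

    // At grid-update time, publish the age to your performance counter of choice:
    //   latencyMsCounter.RawValue = (long)stamped.AgeNow().TotalMilliseconds;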

In the example I described imposing specific per-second rates at various stages – this is only for illustration; we did not use such specific limits.  How you implement suspension depends on what you want your threads to be doing and how you want to allocate them to work.  But that is another story.

PowerShellTunnel, what is it and why is it..

While using PowerShell to construct and manipulate .NET objects, did you ever think it would be pretty darn cool to be able to open a PowerShell console, connect to a running application, and directly access its objects (at least the objects it exposes)?

Some things you might want to do:

  1. Ad-hoc debugging, diagnostics, or monitoring.
  2. Changing object properties or calling methods at runtime.
  3. Ad-hoc (or scripted) unit, system, or integrity tests on a live application.
  4. Simulating events and actions.
  5. Perhaps even adding or changing functionality on-the-fly.
  6. … probably many other things you might think of.

There is an existing PowerShell Remoting project which uses a remote service you connect to in order to create a PowerShell host that you talk to through your client connection.  This isn’t quite what I was after, and after hearing that PowerShell 2.0 was coming, I decided to wait and see.  PowerShell 2.0 does have a Remoting ability, but it too is focused on the administrative desire to summon a PowerShell console on a remote host and control it from the client; neither approach can connect to an existing application’s embedded PowerShell runspace.

So, after some reading and playing, I had a go and came up with PowerShellTunnel, a project which contains:

  1. Server-side cmdlets allowing you to start a ‘tunnel host’ from a PowerShell console (or any PowerShell runspace).
  2. Client-side cmdlets allowing you to start a ‘tunnel’ (connection) from a PowerShell console or runspace to an existing tunnel host (local or remote) and send scripts to the tunnel host console or runspace.
  3. Tab-expansion ‘works’, in that while typing a script destined for a tunnel host, pressing Tab returns tab-expansion results from the tunnel host’s runspace.  A gotcha (for now) is that as long as you have a ‘current tunnel’ selected, tab expansion always diverts to the current tunnel’s host.
  4. An ordinary ‘embeddable’ PowerShell runspace class (hostable by any .NET app) where you explicitly specify which objects to expose (and under which PowerShell variable names) and which tunnel hosts to host – see the sketch after this list.
  5. An example console application with a few simple objects that you can connect to from an ordinary PowerShell console.
  6. WCF does all the legwork of the underlying connection; by default the code uses HTTP.  Using WCF means we avoid having to worry about transport options, security, and other issues, as these should all be configurable.
  7. WCF-serializable objects can be piped into and out of the tunnel (types unknown to WCF DataContractSerializer need to be registered as known types).
  8. The cmdlets also allow an ordinary PowerShell console to act as a tunnel host (the easiest way to start playing with PowerShellTunnel is to use one PowerShell console as the host and another as the client).  Similarly an embedded runspace could start a tunnel to any tunnel host too.
  9. Any console or runspace can have multiple tunnel hosts and/or can have multiple tunnels open.
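
To give a feel for the embeddable runspace in point 4, here is a plain System.Management.Automation sketch of hosting a runspace and exposing an object under a chosen variable name; PowerShellTunnel’s actual class and method names differ (see the documentation on the gallery page):

    using System;
    using System.Management.Automation;
    using System.Management.Automation.Runspaces;

    class EmbeddedRunspaceSketch
    {
        static void Main()
        {
            // Host a PowerShell runspace inside this application...
            Runspace runspace = RunspaceFactory.CreateRunspace();
            runspace.Open();

            // ...and explicitly expose a chosen object under a chosen variable name.
            runspace.SessionStateProxy.SetVariable("greeting", "hello from the host app");

            // A script arriving over a tunnel would then run against the
            // application's own runspace, something like:
            using (Pipeline pipeline = runspace.CreatePipeline("$greeting.ToUpper()"))
            {
                foreach (PSObject result in pipeline.Invoke())
                    Console.WriteLine(result);  // HELLO FROM THE HOST APP
            }

            runspace.Close();
        }
    }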

To make a long story short, if you want to try it out, I uploaded it to:

http://code.msdn.microsoft.com/PowerShellTunnel

The download is a single PowerShellTunnel.sln (VS2005).  The code gallery site above includes documentation on how to use it too.

Let me know if there is similar technology out there, or if PowerShell 2.0’s Remoting can be adapted to this end (from what I read it cannot – for example, it only connects to sessions spawned from that same console, which may be remote, but which are not pre-existing or embedded).  Also please comment here or on the code gallery with any bugs, ideas, thoughts or pointers.
