NetBIOS over TCP/IP, also known as NBT, is a bad idea whose time never should have come. We all know we shouldn’t use it, or WINS for that matter; we should just use DNS everywhere. And we also know that we shouldn’t eat a lot of bacon. But if someone has a plate of bacon ready for me at the bottom of the stairs every morning, I will eat some of that bacon, every morning. And so it is for NetBIOS: in a few cases, such as when connecting to a particular VPN, I will eat the bacon of technology and just let NetBIOS resolve the host names on the remote network.

For the last 15 years, this has generally worked well. And why not? NetBIOS is grossly inefficient–firing broadcasts of all kinds around the entire LAN (and if on a VPN, the remote network) to find out who is who and what is what–but that’s like using a tennis racket to hit a ping pong ball: you’ll hit the ball, every time.

Yesterday, NetBIOS name resolution just stopped working for me. I had put my Windows 7 workstation onto the network of a large corporate customer, and noticed I could no longer reach remote VPN machines using their NetBIOS names. That’s OK, I thought, when I get back onto my home network, all will be well. But all wasn’t well, even on my home network.

After quite a bit of googling, trial, and failure, most of it involving running various nbtstat commands on my adapters or net view commands, I found that ipconfig /all showed a working computer to have a Node Type of “Hybrid”, and my failing workstation to have a Node Type of “Peer-Peer”.

To set the Node Type to “Hybrid”, I had to edit the registry as described here, using these steps:

1) Run the registry editor and open this key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Netbt\Parameters
2) Delete the DhcpNodeType value if it’s present.
3) If the NodeType value isn’t present, create it using type: DWORD.
4) Set NodeType to 8 (Hybrid).
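The same four steps can be captured in a .reg file and merged with a double-click (a sketch of the steps above, not an official Microsoft file; the "=-" syntax deletes the DhcpNodeType value if present, and dword:00000008 sets Hybrid; back up the key before merging):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Netbt\Parameters]
"DhcpNodeType"=-
"NodeType"=dword:00000008
```

As with the manual steps, disable and re-enable the network adapter afterward.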

Then I disabled and re-enabled my network adapter, and voila! I could once again use NetBIOS, both on my LAN and to reach remote hosts over VPN. Now that’s some good bacon!



When I published my first post on Avoiding Migraines Resulting From Changes In Barometric Pressure in 2013, I had no idea how many fellow migraineurs would read, engage, and comment.

“Hi guys, OK so this really does work. I suffered when I lived in Virginia. Moved to Georgia, no headaches, moved back to Virginia, headaches, moved to Delaware, suffered horribly. The worst ever! Found this article, moved back to Georgia, no headaches. I’m so serious, I can live now.” – Kyle

I have been touched by the gratitude shown by many of the readers, and inspired that I have been able to help others–if not with their migraines directly, then at least with a better understanding of one apparently common migraine trigger. Many were happy to see some useful data that could help them understand the barometric pressure characteristics of places where they lived or were considering moving to. Others asked me where I got my data, some wanted to see hourly variation, and many others wanted to see global variation data.

“Could u be kind & send me a list of the best worst places to live in Western Europe. I am hoping your list will identify the best place to live in UK I suspect all of the UK will be bad but I am stuck until I can retire & cant move to Spain or Malta until then…Thank u God for guiding me to this site.” – Harry

For those who wanted more, this post is for you.

(The Usual Disclaimer: I’m not a doctor, and am in no way qualified to give medical advice. I organized this data for myself and for the benefit of those who believe that living in a place with less barometric variation could be good for their health, so that they could see which cities have more or less barometric variation.)

Where I Got My Data

Although the original data set I used to compile my original U.S. list no longer seems to be online, I was able to find a global dataset on the FTP site of the National Climatic Data Center (NCDC), the public data area of the National Oceanic and Atmospheric Administration (NOAA), which contains barometric pressure readings for more than 11,700 weather stations around the world. Downloading all data from 2008 through March of 2016, I constructed a database of over 322 million barometric measurements, many of them taken at intervals as short as 15 minutes. The database weighs in at just under 10 gigabytes. There’s so much data, in fact, that my first task was to take a sample to see if hourly or 15-minute data would prove more useful than daily data. If I could research global barometric variation using the daily data set, it would really save on computing resources and allow me to publish results much more quickly.

Hourly Variation

I chose 13 weather stations distributed around the world that were in larger population centers (as opposed to weather rafts or remote air force bases) and had hourly pressure data available since 2008–there were only 476 such stations to choose from, the vast majority of them in the U.S. (320) or Canada (129). I then compared the percentage of days per year that met my standard migraine-inducing daily variation threshold (a .20 or greater change between 24-hour measurements) with a new hourly variation threshold (a .02 or greater change between any two consecutive hourly measurements). I selected the .02 hourly threshold because, like a .20 pressure change over a 24-hour period, a .02 pressure change in an hour occurs at approximately a 20% rate throughout the data set.
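To make the daily threshold concrete, here is a minimal Python sketch of the computation, using hypothetical readings rather than the NOAA data; the same idea, with a .02 threshold on consecutive hourly readings, gives the hourly measure:

```python
def threshold_exceedance(pressures, threshold=0.20):
    """Fraction of day-over-day changes at or above the threshold.

    pressures: one barometric reading per day, in inches of mercury.
    """
    changes = [abs(b - a) for a, b in zip(pressures, pressures[1:])]
    if not changes:
        return 0.0
    # Small tolerance so float rounding doesn't drop borderline .20 changes
    hits = sum(1 for c in changes if c >= threshold - 1e-9)
    return hits / len(changes)

# A hypothetical week of daily readings (not from the NOAA dataset)
week = [30.05, 29.85, 29.90, 30.12, 30.10, 29.88, 29.87]
print(round(threshold_exceedance(week), 2))  # → 0.5
```

Run over a year of readings for one station, this fraction is the “percentage of days” figure used throughout this post.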

Here are the data on hourly variation:


Here are those data plotted for correlation:


Other than the outlier–Denver (which, as a high-altitude city, can be expected to show greater measurement error, greater true variation, or both)–it seems reasonable to conclude that daily barometric variation is an excellent proxy for hourly barometric variation.

Global Variation Data

Using daily changes, I was able to construct both a master list and several maps showing the annual barometric pressure variation of the world’s cities.

Let’s show the maps first, because they reveal some rather amazing patterns regarding barometric pressure variation.

Note: If you want to see the maps in full screen mode, you can click on them to get a full screen slideshow. You can also right-click and then open each image in a new tab, and if you do this, on the new tab you can zoom the browser in to closely examine a region of interest.

The World


First, you can see that there’s not much red (more than 50% of days reaching the .20 threshold variation is quite rare on this planet), so for the most part, blue means very few days of high pressure variation, green means more days of high variation, and yellowish colors mean a lot of days of high barometric pressure variation. For my migraine patterns, I would live anywhere that is colored dark blue without a moment’s hesitation, and I would not want to live anywhere green and certainly not anywhere yellow. (Anecdotally, my migraines have been at their worst the times I have lived on the U.S. East Coast, and at their best when I have lived in California).

Second, you can see that these variations are almost perfectly related to latitude, with practically zero variation in the tropics, and latitudes in the Northern Hemisphere generally showing lower variation than counterpart latitudes in the Southern Hemisphere. There are some interesting exceptions:

  • Coastal California, Portugal, Italy, and the Balkans seem to have considerably smaller pressure variation than would be expected from their latitudes. So these are likely better than expected places to live for migraineurs.
  • The United States East Coast has high variation relative to its latitude.
  • The United States Mountain Time Zone has very high variation relative to its latitude.

Next, you can review eight detailed zoom-ins on the global map.

North America

The further south, the better, except for California, which is all blue. It is worth pointing out that there is a material difference between Crescent City, in extreme Northern California (12% of days annually cross the .20 threshold) and San Diego (1% of days), just not enough to change the colors on this particular map. (Interested viewers can download the raw data spreadsheet at the bottom of this document for more details.) Also of note, some of the highest barometric variation in the world occurs in North Dakota for some reason.


Eurasia and North Africa

Europe and North Africa follow latitudes pretty closely, with the biggest surprises in the United Kingdom and Japan. Ireland has much higher barometric variation than expected for its latitude. The East Coast of Central Japan has shockingly high variation given that it’s on the same latitude as places with almost no barometric variation like Tel Aviv, Lisbon, and Islamabad. Norway also seems to be a bit worse than comparable latitudes in Sweden or Finland.


Africa and South Asia

Ah, tropical living! Except for the unexpected swath of pressure variation in Coastal South Africa, living anywhere on this map would have you pretty safe from pressure-induced migraines.



Oceania

Oceania follows latitude predictions as expected. Sydney has low variation, Melbourne is moderate, and New Zealand can get extreme on its wild southern end. I have no idea why Sydney and Melbourne don’t show up on this mapping software, where instead we see Newcastle and Traralgon.


South America

Very high and narrow mountain ranges such as the Sierras and Andes seem to throw off latitude correlation. In South America, there is a line of exceptionally high variation on the Eastern edge of the Andes. This is similar to the line of exceptionally low variation on the Western edge of the Sierras in North America.


Western Europe

In Western Europe, there are very few measurements available in Germany for some reason. As mentioned earlier, Ireland and Scotland have shockingly high pressure variation, presumably related to the legendary wind and rainfall in those areas. (In addition to not being a medical doctor, I’m also not a meteorologist. I’m just a guy who gets a lot of migraines when the barometric pressure changes, and I’m happy to know that I shouldn’t ever visit Ireland in January.) I don’t understand the blue dots in the area of Northern Poland and Lithuania, but maybe migraineurs there are getting a little bit of a break. Or maybe there’s some measurement error there.


United States

I’ve written a lot about the United States in prior articles, so I’ll just leave it at this question: why does central North Dakota have the highest barometric pressure variation on the planet? If you go about 500 miles due east or west, you get to Duluth/Superior or Missoula, where there’s still a decent amount of pressure variation, but nothing like the worst variation on Earth. Denver is also much, much worse than you would expect. Another case of being on the Eastern edge of a large mountain range? Or perhaps more measurement error?



Canada

Canada is really not a good place for migraine sufferers who are triggered by changes in barometric pressure. The best major cities in Canada seem to be Vancouver, Toronto, and Montreal, so at least that covers a reasonable percentage of the Canadian population. Flin Flon, Manitoba seems particularly bad. Yes, I just wanted to write the words “Flin Flon, Manitoba”.



Zero Days of .20 Variation Over 2,000 or More Measurements

For those of you who would like to visit a place that has not experienced a single day of .20+ variation since 2008, and for which we have at least 2,000 recorded pressure measurements since that time, there happen to be 245 such places on this planet. Note that many places between the tropics have certainly had zero days of .20+ variation since 2008, but do not appear on this map because we don’t have 2,000 measurements for those places. This would likely be the case with much of Africa. To get a good look at this map, you can right-click on the map and select “Open image in new tab”, and then zoom in on the image.



The Raw Data

Saving the best for last, perhaps, feel free to download this Global-Barometric-Pressure-Threshold-Variation Excel spreadsheet. It contains the threshold variation percentage for every weather station with at least 50 daily change measurements since 2008, and the spreadsheet tabs provide both annual and month-by-month data. The spreadsheet is 3.5 MB in size, and so might take a little while to download on slower internet connections.

So, for example, if you live in Cape Town, South Africa, you could go to the Annual tab of the spreadsheet, use Control-F to search for “CAPE TOWN”, and see that the Cape Town International Airport (CAPE TOWN INTL) has 14% of its days throughout the year (51 days) experiencing a barometric pressure variation of .20 or higher. If a .20 pressure variation triggers a migraine headache every time, then a migraineur who lives in Cape Town could expect at least 51 migraines per year while living there. If you want to see whether this varies by season (it does in every place I’ve examined), you could go to the January tab, search again for “CAPE TOWN”, and see that only 4% of days in January (perhaps one day each January) experience threshold variation. So the summer in Cape Town, as in most places, is a time of much lower barometric pressure variation. The winter in South Africa is another story: 23% of days in July (an average of 7 days each July) experience threshold variation in Cape Town, which would be problematic for a migraine sufferer with a barometric pressure trigger.
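The Cape Town arithmetic above can be sketched in a couple of lines of Python (the percentages are the ones read from the spreadsheet):

```python
def expected_migraine_days(pct_of_days, period_days):
    """Expected number of days crossing the .20 threshold in a period."""
    return round(pct_of_days * period_days)

print(expected_migraine_days(0.14, 365))  # annual: 51 days
print(expected_migraine_days(0.23, 31))   # July: 7 days
```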

This spreadsheet is the best way to see the month-by-month variation for the weather station closest to where you live.




Yesterday, Apple CEO Tim Cook published a letter to Apple customers, in response to an order given by the United States Government directing Apple to provide technical assistance to federal agents attempting to unlock the contents of an iPhone 5C that had been used by Rizwan Farook, who along with his wife, Tashfeen Malik, killed 14 people and wounded 22 others on December 2 in San Bernardino, California.

A United States Magistrate judge in Los Angeles has upheld the government order, clearing the way for a certain appeal by Apple to the 9th U.S. Circuit Court of Appeals, which is notoriously pro-privacy, and a possible final appeal to the United States Supreme Court.

Cook’s primary concern is not the technical assistance Apple might provide in decrypting information contained on that single phone (in fact, according to an Assistant United States Attorney, Apple has already complied with over 70 such requests since 2008, see article here), but rather the ensuing creation of legal and technical precedents which could require manufacturers to provide government agencies with encryption “back doors” in upcoming iOS releases. If upheld, the government’s order, which is argued using a rarely invoked 1789 congressional statute (the All Writs Act), could indeed provide legal precedent for the U.S. Government to require encryption back doors to be engineered into any product created by any manufacturer.

Reading Tim Cook’s letter immediately got me wondering: if the government prevails in its case, and these back doors become required, what would be the effect on the medical videoconferencing industry? As is so often the case, the devil is in the details.

To be feasible for a given videoconferencing product, encryption back doors would require three technical and operational conditions:

1) The creation by the product manufacturer of a master key which, when used, would provide to the holder of the master key the session key for an encrypted video session; and,
2) The provision of that master key by the manufacturer to a government agency upon proper request; and,
3) The possession of the encrypted data stream by the government agency.

The first condition is not terribly difficult to meet. Private session keys are required for every encrypted session, and thus the provision of those keys based on an authenticated master key is, at most, an implementation detail.
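As a toy sketch of that first condition (illustration only: this is not real cryptography and not any vendor’s actual scheme), each session gets a fresh key, and a copy of that key is “wrapped” under the manufacturer’s master key so a master-key holder could recover it later and decrypt a captured stream:

```python
import hashlib
import secrets

def wrap(session_key: bytes, master_key: bytes, nonce: bytes) -> bytes:
    # Toy "encryption": XOR the session key with a master-key-derived pad
    pad = hashlib.sha256(master_key + nonce).digest()[:len(session_key)]
    return bytes(a ^ b for a, b in zip(session_key, pad))

def unwrap(wrapped: bytes, master_key: bytes, nonce: bytes) -> bytes:
    return wrap(wrapped, master_key, nonce)  # XOR is its own inverse

master = secrets.token_bytes(32)    # held by the manufacturer
session = secrets.token_bytes(16)   # fresh key for one video session
nonce = secrets.token_bytes(16)     # stored alongside the escrowed key

escrowed = wrap(session, master, nonce)   # the "back door" copy
assert unwrap(escrowed, master, nonce) == session
```

The point of the sketch is how little machinery the first condition requires; the hard problems are the second and third conditions discussed below.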

The second condition would require, one hopes, some detailed legal prophylaxis to ensure that the master key is used only in certain clearly defined, and relatively rare, circumstances. Of greater concern, however, would be the safeguarding of the master key. If one criminal, foreign agent, or manufacturer or government employee gained access to the master key for a device, the security for that device would be compromised until the master key could be changed. If the unauthorized access was obtained without the manufacturer’s awareness of the breach, then the security for that device would be compromised for an indefinite period of time.

The third condition requires that the data stream be accessible by the manufacturer, or the customer to whom the manufacturer has sold or leased the product. Of the three conditions, this is the trickiest for the government, because access to the encrypted data stream will differ for each product, deployment model, and customer. Gaining access to a video stream from an MCU hosted in the cloud by Manufacturer A is as simple as gaining network access to the Manufacturer A data center, which Manufacturer A would presumably be required to grant. However, gaining access to a peer-to-peer stream being transmitted directly from one computer to another would require foreknowledge of the internet routes to be taken by the video packets, which ranges from unlikely to impossible depending on the specifics of the connection.

If the government prevails in its case, it is possible that every large videoconferencing vendor using a Multi-point Control Unit (MCU) would be required to construct a back door, thus eliminating the possibility of absolute privacy in MCU-based videoconferencing systems, except those hosted in data centers to which the United States Government cannot demand access, or those manufactured by smaller vendors to whom the government has not applied the requirement (due to oversight, undue burden, or lack of volume). The vast majority of videoconferencing providers use some kind of MCU technology, and so it is not difficult to imagine these vendors eventually offering offshore cloud-based MCUs, or customers with On Premise deployments deciding to host part of the infrastructure offshore. Cloud customers would have the option of paying less for optimal performance where video streams would be subject to government capture and decryption, or paying more for sub-optimal performance where video streams would not be subject to government capture and decryption.

Customers of peer-to-peer systems such as SecureVideo/VSee, on the other hand, would not be affected by MCU access, because there is no MCU in a peer-to-peer system. While the government could require the provision of a master key, the government in most cases would not be able to capture the encrypted packets, and could therefore not gain access to the encrypted video streams.

As to the likely market reactions, your guess is as good as mine. It is possible that most videoconferencing customers won’t care about possible government decryption of their video streams, but medical videoconferencing customers will care deeply, given that a master key breach could put massive amounts of protected health information into unauthorized hands. If this happens, I would expect many of them to explore peer-to-peer technologies such as ours. At the very least, depending on what happens with Apple’s appeals process, this is a very important development for medical privacy professionals to keep an eye on, with respect to videoconferencing as well as other affected technologies such as mobile devices, full disk encryption, cloud storage, secure web transactions, and whatever else you can think of that has encryption as a security underpinning.


In a previous article, Avoiding Migraines Resulting from Changes in Barometric Pressure, we used pressure-induced migraines as an example of why a clinician might choose to relocate from their client base, and therefore need a videoconferencing service to continue meeting with patients. As part of the example, we shared lists of US cities with the least and most barometric variation. Due to requests, we are posting in full the dataset we created.

(Disclaimer: I’m not a doctor, and am in no way qualified to give medical advice. I organized this data for myself and for the benefit of those who believe that living in a place with less barometric variation could be good for their health, so that they could see which cities have more or less barometric variation.)

Update: in March 2016 I published a Global List of Barometric Variation

U.S. Cities Proportion of Days, May 2007 to May 2013, with .20 or greater change in Barometric Pressure from the previous day



Fiddler is a fantastic tool that allows developers and IT professionals to see what’s happening under the hood when a web page is requested by a user.  Somewhat simplified, when a user visits a web page, whether she knows it or not, she is using the HTTP protocol to request a web resource, and then the web server is using HTTP to respond with the HTML of the page itself.  This HTML is rendered by the browser into a pretty page, after any source files such as images or style sheets that were referenced in the HTML are downloaded, also using HTTP.

For the first time in my career, I came across a situation today where a user’s computer was crashing the moment that user visited a web page using Internet Explorer 9.  While this is apparently some issue with that particular computer–no browser should allow its host to crash, regardless of what HTML is returned–I wanted to see some Fiddler output in order to confirm that the HTML request and response causing this crash were not being intercepted and modified somewhere along the way.  The problem was this: the Fiddler log could not be analyzed or saved, because the computer was crashing the moment the web page was requested!

Fortunately, Fiddler has a scripting technology called FiddlerScript that offers hope in a situation like this.

Step 1: Click “Customize Rules”



Step 2: In the CustomRules.js file that pops up when you click “Customize Rules”, add code to decode the response and then save the response body:

 static function OnDone(oSession: Session) {
   // Put the URL you want to save here
   if (oSession.url.Contains("")) {
     oSession.utilDecodeResponse();  // decode compression/chunking first
     oSession.SaveResponseBody();    // saves under My Documents\Fiddler2\Captures
   }
 }
This will write the response, after Fiddler has received it, and before it is passed to the browser, to your My Documents\Fiddler2\Captures folder.

Step 3: Request the crash page, and hopefully after you recover from the crash, you will have one or more HTML files in your My Documents\Fiddler2\Captures folder.

Step 4: You can then compare the HTML returned in the capture to the HTML which is returned by any other computer.  If it is the same HTML, then the computer is clearly having some problem, and needs to be fixed.  If the HTML is different, then some process either on the host computer (such as a worm) or on the host network is most likely modifying the HTML to cause the crash.

Note, depending on the timing and cause of the crash, this code might need to be placed in different functions.  I started with the code in OnDone so that it would run once the entire request and response were available, but depending on the crash timing, it might have to be placed earlier.  This whole idea also might or might not work in some cases, depending again on the crash cause and timing.


If you’re like me, someone who has been building web applications for 15 years or so, you probably freaked out the first time you pasted a screenshot into Gmail.  You thought, “what just happened?”  You thought, “wait, this shouldn’t be possible!”  And your immediate next thought was, “omg, how do I do that?”

This is not ubiquitous functionality at the moment–I’m not able to paste a screenshot into Yahoo! Mail or WordPress right now, nor have I had a need to figure out a way to paste anything using Internet Explorer or Firefox.  In building a Knowledge Base that lets our support team serve content onto our website, we decided to let them paste a screenshot to our server using AJAX, have the server show the URL, and then upload HTML that includes the screenshots by creating image tags in the TinyMCE HTML Editor.

Anyone can implement TinyMCE by googling, but the tricky part was getting the paste and AJAX to work, and mind you, as of the time of this writing, this only works in Chrome.  That’s fine for me since our support team uses Chrome, but if you can’t control the browser choice, then this method will not be as valuable to you.

First, you need to capture the paste event on your web page.  This is done using some Chrome-specific Javascript to handle the paste event, and jquery to send the image to the server via AJAX.

   document.onkeydown = function (e) { return on_keyboard_action(e); }
   document.onkeyup = function (e) { return on_keyboardup_action(e); }

   var ctrl_pressed = false;

   function on_keyboard_action(event) {
       var k = event.keyCode;
       if (k == 17) {
           if (ctrl_pressed == false)
               ctrl_pressed = true;
       }
       return true;
   }

   function on_keyboardup_action(event) {
       var k = event.keyCode;
       if (k == 17)
           ctrl_pressed = false;
       return true;
   }

   // Paste in from Chrome clipboard
   window.addEventListener("paste", pasteHandler);

   function pasteHandler(e) {
       if (e.clipboardData) {
           var items = e.clipboardData.items;
           if (items) {
               for (var i = 0; i < items.length; i++) {
                   // Only process anything if we have an image
                   if (items[i].type.indexOf("image") !== -1) {
                       // Get the pasted item as a File Blob
                       var blob = items[i].getAsFile();

                       // Reader will read the file
                       var reader = new FileReader();

                       // This fires after we have a base64 load of the file
                       reader.onload = function (event) {
                           // Once reader loads, send the base64 data to the server
                           $.ajax({
                               type: "POST",
                               url: '/Knowledge/Screencap',
                               data: event.target.result,
                               success: function (resultHtml) {
                                   // Show the uploaded image: the server returns its URL
                                   // (the #pastedImage element id is illustrative)
                                   $('#pastedImage').attr('src', resultHtml);
                               }
                           });
                       };

                       // Convert the blob from clipboard to base64;
                       // after this finishes, reader.onload will fire
                       reader.readAsDataURL(blob);
                   }
               }
           }
       }
   }
Once you’ve got the paste and AJAX calls set up, the user pastes an image, and the AJAX call sends your base64-encoded image to the server as a data URL: the HTTP POST body begins with something like data:image/png;base64, followed by the encoded image bytes.


On the ASP.NET MVC side, I was not able to get the controller to automatically bind the posted data into a controller parameter.  It’s probably possible, but I’m under some time pressure, so I just examined the HTTP Request’s Input Stream, and picked the image from there.

   [HttpPost]
   public ActionResult Screencap()
   {
      // Get the raw input stream (return to the start of the stream first!)
      Request.InputStream.Position = 0;
      string payload = new StreamReader(Request.InputStream).ReadToEnd();

      // The payload is a data URL: "data:image/png;base64,iVBOR..."
      string indicator = "base64,";
      int imageStartIdx = payload.IndexOf(indicator);
      if (imageStartIdx >= 0)
      {
          string base64Image = payload.Substring(imageStartIdx + indicator.Length);
          byte[] fileBytes = Convert.FromBase64String(base64Image);
          // saveToPath is illustrative; generate a unique server-side file name
          string saveToPath = Server.MapPath("~/Captures/" + Guid.NewGuid() + ".png");
          System.IO.File.WriteAllBytes(saveToPath, fileBytes);
          // Return the URL of the newly saved file for display on the browser
          return Content(PathManager.ToUrl(saveToPath));
      }
      return new HttpStatusCodeResult(400);
   }

Now my support staff can add Knowledge Articles, including lots and lots of screenshots (a good thing), without ever leaving the browser window!


So…what do Migraine Headaches induced by Barometric Pressure have to do with videoconferencing? A lot, if you’re a clinician who suffers from these nasty pressure-induced Migraines, and you’re considering relocating away from your client base.

I was recently talking to one of our new clinicians, and we discovered that we both happen to suffer from pressure-induced Migraines.  When she told me she lived in Redding, California, which has among the higher atmospheric pressure variations in California, I asked if she had ever considered moving to San Diego, one of the major U.S. cities with the most stable atmospheric pressure.  She told me that indeed she had, and that her hope was that videoconferencing could help her transition her practice from her office in Redding to a virtual practice based in San Diego, where she could see anyone within the State of California and be free of the migraines that cost her so many days of work and so much misery.

Since I’m here to help, and the internet contains a very high ratio of raw to processed barometric pressure information, I decided to compile some lists for her (and me) on best and worst U.S. cities and states for atmospheric pressure change.  For me, a .20 change in the barometric pressure (e.g., from 30.05 to 29.85, or vice versa) triggers a migraine nearly every time, so I used .20 as the threshold, and looked at the number of days per year a city reported a .20 pressure swing in either direction.  I used data from May, 2007 through May, 2013, from 966 USGS weather stations.  The following lists summarize the results, cut in some interesting (and hopefully actionable) ways.

Update: in March 2016 I published a Global List of Barometric Variation

(Disclaimer: I’m not a doctor, and am in no way qualified to give medical advice. I organized this data for myself and for the benefit of those who believe that living in a place with less barometric variation could be good for their health, so that they could see which cities have more or less barometric variation.)

20 Major U.S. Cities with the Least Barometric Variation (days per year of >= .20 changes)

  1. Honolulu (0 days per year)
  2. Miami (4)
  3. San Diego (7)
  4. Los Angeles (7)
  5. Tampa (11)
  6. San Jose (14)
  7. Sacramento (18)
  8. San Francisco (18)
  9. Phoenix (22)
  10. New Orleans (22)
  11. Jacksonville (22)
  12. Birmingham (29)
  13. Houston (29)
  14. Atlanta (37)
  15. San Antonio (37)
  16. Austin (37)
  17. Memphis (44)
  18. Las Vegas (47)
  19. Little Rock (48)
  20. Charleston, SC (48)

Not surprisingly, it is the southern cities which have the fewest days of variation.  The “worst” list reinforces this theme:

20 U.S. Cities with the Most Barometric Variation (days per year of >= .20 changes)

  1. Augusta, Maine (128 days per year)
  2. Rapid City, SD (127)
  3. Montpelier, VT (117)
  4. Bismarck, ND (117)
  5. Boston (116)
  6. Colorado Springs (113)
  7. Denver (110)
  8. Billings, MT (109)
  9. Providence (109)
  10. New Haven (105)
  11. Cheyenne (105)
  12. Anchorage (104)
  13. Detroit (102)
  14. New York City (99)
  15. Buffalo (98)
  16. Minneapolis (98)
  17. Omaha (94)
  18. Chicago (91)
  19. Philadelphia (90)
  20. Baltimore (87)

At the U.S. State Level, here is the complete list:

  1. Hawaii (0)
  2. Florida (14)
  3. California (18)
  4. Alabama (27)
  5. Louisiana (27)
  6. Mississippi (28)
  7. Arizona (33)
  8. Georgia (35)
  9. Texas (45)
  10. Tennessee (46)
  11. Arkansas (46)
  12. South Carolina (48)
  13. Nevada (59)
  14. North Carolina (60)
  15. Oregon (61)
  16. Kentucky (62)
  17. Missouri (68)
  18. New Mexico (72)
  19. West Virginia (73)
  20. Oklahoma (73)
  21. Washington (75)
  22. Illinois (78)
  23. Virginia (78)
  24. Indiana (80)
  25. Utah (81)
  26. Ohio (82)
  27. Kansas (84)
  28. Maryland (85)
  29. Iowa (85)
  30. Idaho (86)
  31. Pennsylvania (89)
  32. Delaware (89)
  33. Wisconsin (92)
  34. New Jersey (96)
  35. Colorado (99)
  36. Michigan (101)
  37. Minnesota (101)
  38. Alaska (101)
  39. New York (102)
  40. Nebraska (103)
  41. Connecticut (106)
  42. Rhode Island (107)
  43. Wyoming (107)
  44. Montana (108)
  45. Massachusetts (111)
  46. Vermont (112)
  47. New Hampshire (115)
  48. South Dakota (119)
  49. North Dakota (120)
  50. Maine (127)

Looking more deeply, we also see major differences by season.  From April 1 to September 30, the national average is only 18 days of high barometric variation.  From October 1 to March 31, the average is 50 days.  This data is consistent with much higher reported incidence of migraines in the winter months.

Here’s a sample distribution of barometric pressure variation for Austin, Texas.  The number of days is the average number of high variation days for that month of the year, from 2007 to 2013.

  • January – 6 days
  • February – 8 days
  • March – 5 days
  • April – 4 days
  • May – 2 days
  • June, July, August, September – 0 days
  • October – 3 days
  • November – 4 days
  • December – 7 days

So, if you live in Austin, more than half of your bad migraine days will be in the three winter months December to February.  This seasonal pattern seems to hold true for most of the country.
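For readers who want to reproduce this kind of count from raw station data, here is a minimal sketch of the counting rule. This is a hypothetical illustration, not the actual pipeline used for the tables above; the readings are made-up sample values in inches of mercury.

```python
# Hypothetical sketch: count "high variation" days from once-daily
# barometric readings. A day qualifies when the absolute change from
# the previous day's reading is at least 0.20.
def count_high_variation_days(daily_pressures):
    count = 0
    for prev, curr in zip(daily_pressures, daily_pressures[1:]):
        if abs(curr - prev) >= 0.20:
            count += 1
    return count

# Made-up week of readings: two qualifying swings (+0.25 and -0.35)
readings = [29.92, 29.95, 30.20, 29.85, 29.90]
print(count_high_variation_days(readings))  # 2
```

Run over seven years of daily readings and grouped by month, the same rule yields the per-month averages shown above.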

The final cut of the data I looked at was to answer the question, “is this getting worse?”  The answer is no: the year-to-year differences appear to fall within the bounds of normal random variation.

So, what does it all mean?  Mostly, that if you suffer from pressure-induced migraines, and you live in the northern U.S. states, you may be able to significantly improve your quality of life by relocating to one of the southern states, especially to southern California or Florida.  And, that if you do that and work in a medical field, is standing ready to help you telecommute in a HIPAA-compliant way.


Full list of cities is here:


Read More

I wasn’t able to find any good technical examples of how to implement JSON Web Tokens (JWT) in .NET when the key is base64url-encoded according to the JWT spec (, page 35).

John Sheehan’s JWT library on GitHub is a nice starting point, and works well when the key is already ASCII-encoded, but it cannot be used without modification when the key is base64url-encoded.

Here’s the solution:

// URL Encode the string, according to
//, page 35
public string Base64UrlEncode(byte[] arg)
{
    string s = Convert.ToBase64String(arg); // Regular base64 encoder
    s = s.Split('=')[0]; // Remove any trailing '='s
    s = s.Replace('+', '-'); // 62nd char of encoding
    s = s.Replace('/', '_'); // 63rd char of encoding
    return s;
}

public byte[] Base64UrlDecode(string arg)
{
    string s = arg;
    s = s.Replace('-', '+'); // 62nd char of encoding
    s = s.Replace('_', '/'); // 63rd char of encoding
    switch (s.Length % 4) // Pad with trailing '='s
    {
        case 0: break; // No pad chars in this case
        case 2: s += "=="; break; // Two pad chars
        case 3: s += "="; break; // One pad char
        default: throw new System.Exception(
            "Illegal base64url string!");
    }
    return Convert.FromBase64String(s); // Standard base64 decoder
}

// Implementation of,
// section A.1.1, JWS using HMAC SHA-256 (encoding), by J.T. Taylor,
public string GetAuthenticationToken(string base64UrlEncodedSecretKey, string userId)
{
    // Get Unix-style expiration date, two days from now
    double unixSeconds = (DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
    double expiry = unixSeconds + (2 * 24 * 60 * 60);

    // Build and encode the JWS header
    string jwsHeader = "{" +
        "\"typ\":\"JWT\"," +
        "\"alg\":\"HS256\"" +
        "}";
    byte[] jwsHeaderUtf8Bytes = Encoding.UTF8.GetBytes(jwsHeader);
    string encodedJwsHeaderValue = Base64UrlEncode(jwsHeaderUtf8Bytes);

    // Build and encode the JWS payload (claims set)
    string payloadJson = "{" +
        "\"sub\":\"" + userId + "\"," +
        "\"iss\":\"service-id\"," +
        "\"exp\":" + expiry.ToString("0") +
        "}";
    byte[] jwsPayloadUtf8Bytes = Encoding.UTF8.GetBytes(payloadJson);
    string encodedJwsPayloadValue = Base64UrlEncode(jwsPayloadUtf8Bytes);

    // Sign "header.payload" with HMAC-SHA256, using the decoded key
    string jwsSecuredInputValue = encodedJwsHeaderValue + "." + encodedJwsPayloadValue;
    byte[] jwsSecuredInputAsciiBytes = Encoding.ASCII.GetBytes(jwsSecuredInputValue);
    byte[] secretKeyBytes = Base64UrlDecode(base64UrlEncodedSecretKey);
    var hmacSha256 = new HMACSHA256(secretKeyBytes);
    byte[] signatureBytes = hmacSha256.ComputeHash(jwsSecuredInputAsciiBytes);
    string encodedJwsSignatureValue = Base64UrlEncode(signatureBytes);

    // The finished token is "header.payload.signature"
    string jwt = jwsSecuredInputValue + "." + encodedJwsSignatureValue;
    return jwt;
}
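For a language-neutral cross-check of the base64url rules (swap '+'/'-' and '/'/'_', strip '=' padding on encode, restore it on decode), here is the same transform sketched in Python using only the standard library. This is an illustration of the encoding rules, not part of the .NET solution above:

```python
import base64

def base64url_encode(data: bytes) -> str:
    # urlsafe_b64encode already maps '+' -> '-' and '/' -> '_';
    # strip the trailing '=' padding per the JWT spec
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def base64url_decode(s: str) -> bytes:
    # Restore '=' padding before handing off to the standard decoder
    s += "=" * (-len(s) % 4)
    return base64.urlsafe_b64decode(s)

header = base64url_encode(b'{"typ":"JWT","alg":"HS256"}')
print(header)                   # no '=', '+', or '/' characters appear
print(base64url_decode(header)) # round-trips to the original JSON bytes
```

Any implementation of the C# methods above should agree with this round-trip behavior.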

Read More

Heise Security, a top German internet security firm, has done some research that will be somewhat frightening to Skype users, especially those who believe their Skype sessions retain any promise of privacy.

A recent H-online article detailed research showing that Microsoft servers are programmed to visit HTTPS (SSL) URLs typed into the Skype instant messaging application. When questioned about this, Microsoft’s response was not believable, from a technical or business standpoint.

“A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites. This explanation does not appear to fit the facts, however. Spam and phishing sites are not usually found on HTTPS pages. By contrast, Skype leaves the more commonly affected HTTP URLs, containing no information on ownership, untouched. Skype also sends head requests which merely fetches administrative information relating to the server. To check a site for spam or phishing, Skype would need to examine its content.”

The most troubling aspect to me is that Microsoft requires users, in order to use Skype, to accept that their information may be accessed by Microsoft; but then, Microsoft will not disclose exactly how that information will be used.

This untrustworthy approach is one of the reasons we started And I don’t think you want Microsoft in your therapy session any more than I do.…

Read More

By Jonathan (JT) Taylor, Chief Technology Officer


From a human perspective, a good videoconference is similar to a good movie.  In a good movie, there is a “suspension of disbelief”, whereby the viewer–initially well aware of being seated in a movie theater and thus disbelieving of the reality of images appearing on the screen–eventually suspends that disbelief to the point where the characters, actions, and conversations on the screen appear real.

Likewise, in a good videoconference, the participant is initially aware of communicating with the other party through a screen, camera, microphone, and speaker, but eventually this “disbelief” of in-person contact is suspended, such that after a few minutes both participants really feel like they’re meeting face-to-face.

The time-tested rules to achieve “suspension of disbelief” in videoconferencing are as simple to enumerate as they are complex to technically implement:

Rule #1) the video must be of a high enough resolution that each speaker’s eyes, hands, facial gestures, and body language can be clearly understood by each listener.

Rule #2) the audio must be clear enough so as to approximate the sounds the listener would hear if the speaker was in the same room.

Rule #3) the video and audio must flow smoothly and naturally, with no hiccups, stops, or gaps.

Rule #4) there must be no delays between the video and audio portions–when the speaker’s mouth moves, the speaker’s words must be heard at the exact same moment.

Sounds easy enough, right?  Perhaps it is, until one considers the various “videoconferencing trolls” which lurk in the shadows of the internet, often confounding even the most expert videoconferencers.  To get a high quality videoconference, each of these trolls must be avoided.


The Inferior Audio-Video Equipment Troll.  As I mentioned earlier, in order to videoconference, each party must have a web camera and microphone–to send the video and audio–and a screen and speakers–to receive the video and audio.  (Many users prefer to use a headset, which combines the speaker and microphone functions, and also overcomes the pesky “Echo” troll.)

A web camera that has low resolution, supports only a low frame rate, or handles light contrast poorly will break rule #1, as will a low-resolution or very small screen image on the viewer’s end.  On the audio side, a microphone or headset that does not pick up the speaker’s words with the proper sensitivity and digital sampling will break rule #2.  A low quality speaker on the listener’s end will have the same effect.

These AV Equipment Trolls are theoretically simple enough for each participant to fix–it just requires spending some money (usually $50 is enough for a webcam, and $30 for a headset containing both speakers and microphone).  However, more videoconferences than I can count have failed to achieve “suspension of disbelief” due to inferior equipment.


The Slow Computer Troll.  Let’s say you’re in a videoconference, and everyone has cleverly avoided The Inferior Audio-Video Equipment Troll by having good equipment.  Your web camera and microphone are picking up your images and words really well, and sending them to your computer.  Now all your computer has to do is to encode those images and words into a digital stream of 0’s and 1’s and send them over the internet, and it has to do so at least as quickly as the 0’s and 1’s are arriving from your A/V devices.

But, alas!  It turns out that this encoding takes a lot of computer processing power (much more than decoding, as it happens).  If your computer does not possess sufficient processing power, your 0’s and 1’s cannot be sent to your videoconferencing partners at the same rate they arrive from your A/V equipment, and your videoconference will be defeated by The Slow Computer Troll, manifesting as violations of rules #3 and #4 above.

To avoid The Slow Computer Troll, you simply need a good enough computer or device. offers a computer speed test, so you can see whether The Slow Computer Troll is inhabiting your computer.  The latest Apple iPad (the iPad 3) runs a very lovely videoconference, so that could be a good way to avoid this particular troll.  If you want to keep your current computer and run the Troll out of town, you could consider a CPU (processor) upgrade.

Networking Trolls.  If your A/V equipment and computer are high enough quality, unfortunately there is an entire phylum of Videoconferencing Trolls which threaten your blissful videoconference: The Networking Trolls.

(Hint: you can run a network speed test to check whether your network speed will keep the Networking Trolls at bay.)

Networking Trolls generally take one of three sub-forms:

The Firewall Troll.  The Firewall Troll holds sway when there is a firewall between your computer and the Internet, as often happens on a corporate network.  In this case, The Firewall Troll (and his cousin, The Network Address Translation Troll) can often prevent a videoconference connection altogether.  The solutions to this Troll are varied and generally complex.  At, we use a combination of Video Proxies, which restrict the videoconference to ports commonly unaffected by firewalls, and protocols such as STUN and ICE, which are specifically designed to overcome address translation issues.  I have seen several other solutions to this problem in the field, most being variants of this approach.

The Low Bandwidth Troll.  The Low Bandwidth Troll appears in the slower corners of the internet where bandwidth is less than 1 megabit per second: generally these are 3G and slower 4G mobile connections, many DSL connections, and corporate T1 networks with many users.  While the normal solution for this problem is to obtain a faster Internet connection, this is generally the most difficult to achieve, often involving high cost and lengthy lead times.  At, our platform uses a technology called “Adaptive Layering” to greatly mitigate this problem.

Adaptive Layering means that the 0’s and 1’s are not sent in a single stream which is then transmitted to all participants (which is how almost all other platforms operate).  Instead, the 0’s and 1’s are arranged into a number of layers.  Participants who can receive the highest resolution streams receive all the layers and get a perfect experience.  Participants who cannot, receive only those layers which comprise the lower resolution stream.  In this way, the media streams are optimized for each participant.
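The layer-selection idea can be sketched in a few lines. This is a toy illustration only–the layer names and bitrates are invented, and this is not the platform’s actual codec logic:

```python
# Hypothetical layered ("scalable") stream selection: each layer adds
# resolution on top of the ones below it. A participant receives every
# layer whose cumulative bitrate still fits their available bandwidth.
LAYERS = [  # (name, bitrate in kbps) -- made-up figures
    ("base-240p", 300),
    ("enhance-480p", 500),
    ("enhance-720p", 1200),
]

def layers_for(bandwidth_kbps):
    chosen, total = [], 0
    for name, kbps in LAYERS:
        if total + kbps > bandwidth_kbps:
            break  # lower layers still form a usable, lower-resolution stream
        chosen.append(name)
        total += kbps
    return chosen

print(layers_for(900))   # ['base-240p', 'enhance-480p']
print(layers_for(2500))  # ['base-240p', 'enhance-480p', 'enhance-720p']
```

A participant on a slow DSL line would thus get a watchable 480p stream while a participant on fiber, in the same call, gets the full 720p stack.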

The Intermittent Troll.  The Intermittent Troll is the King of all the Videoconferencing Trolls.  It is the most common, and the most obstinate.  The Intermittent Troll operates like this: let’s say all other Trolls have been mitigated, and the videoconference is going very well, and every participant truly has suspended their disbelief and now feels exactly as if they’re meeting in person.  Then, for one of the participants, their internet connection hiccups, either due to a routing glitch, network congestion, or a temporarily overloaded internet router somewhere between here and there.  For the vast majority of videoconferencing platforms, The Intermittent Troll will cause choppiness, gaps, freezes, and out-of-sync between video and audio.  Sometimes this condition even forces the participant to disconnect.

Dealing with The Intermittent Troll separates the top videoconferencing platforms from the also-rans.  The platform uses Scalable Video Coding (SVC) to solve the problem.  With SVC, when an intermittent network hiccup is encountered, the video (and occasionally the audio) resolution is subtly and immediately adjusted, so that there is no interruption in the streams and no syncing issue.  The only perceptible effect to the participants is that the background may get a little fuzzier, or in the extreme case, the entire video stream gets less sharp and perhaps even the audio loses a little clarity.  However, once the intermittent condition resolves, the software automatically and quickly adjusts back to the maximum resolution.  From a suspension of disbelief standpoint, it is similar to a moment in a movie when one thinks briefly about other roles an actor has played, but is quickly reabsorbed into the plot line.  In my many years in videoconferencing, SVC is the only effective way I have ever seen to deal with that King of videoconferencing trolls, The Intermittent Troll.


The Echo Troll.  The last major Videoconferencing Troll is The Echo Troll.  The Echo Troll manifests when a speaker hears himself speak, on a slight delay, as he speaks, which has a most disconcerting effect.  It is generally caused by a listener’s microphone and speakers being placed too closely together, whereby 1) the talker speaks; 2) the sound comes out of the listener’s speaker; 3) the same sound enters the listener’s microphone; 4) that sound is sent back to the talker’s speaker; and, finally, 5) the talker is quickly driven insane.

Headphones are a great way to deal with the Echo Troll, as is an echo cancelling speakerphone, such as the Phoenix Duet series.  If you don’t have headphones or an echo cancelling speakerphone, then to defeat the Echo Troll it is necessary to move the microphone and speakers as far apart as possible.

If this is not possible, as with a laptop which may have built in microphone and speakers, then you’d better hope the video-conferencing platform supports echo cancellation.  Which many don’t, but of course, does.

So What?

My #3 hope is that this blog entry has educated you about how videoconferences work: the possibility of achieving “suspension of disbelief” in a high quality videoconference, what it takes to actually achieve one, and why high quality can often be so difficult, thanks to the many Videoconferencing Trolls which can so easily harry and peck at your meetings.

My #2 hope is that you have an excellent remainder of your day, after having read this blog entry.  If this happens, I wouldn’t presume to take credit for it, but then you never really know, right?


My #1 hope is that after reading this, you’ll realize that no videoconferencing platform on this planet is as effective in shutting down the Videoconferencing Trolls, and thereby creating real “suspension of disbelief”, as, and you will see the light and sign up this very moment for a 30 day free trial so that you can see the quality difference for yourself.

Even after as few as 2 or 3 videoconferences, I believe that you’ll start to see what I’m talking about: that a high quality videoconference will make you feel like you really are in the same room as the other person, and after exchanging the subtle nonverbal cues, facial expressions, body language, and eye contact, you will be truly amazed at the power of our technology.…

Read More