Jayson's Blog - jaysonKnight.com
jaysonKnight.com
A conduit to the voices inside my head.

Jayson's Blog

  • Dynamically Inject Validation Controls From A Base ASP.NET Page

I've been out of work sick this week (which really sucks...I don't do very well with idle time, plus I haven't been sick enough to miss work in many years), so what better time to catch up on some blogging?

We had an interesting scenario at work recently whereby we needed to dynamically attach a number of asp.net validation controls to various TextBox controls on all of our existing pages. Specifically, we had written a custom validation control which tests a control's value to make sure it doesn't contain any HTML tags (a common enough scenario). This post assumes knowledge of authoring custom asp.net validation controls, i.e. inheriting from BaseValidator and overriding EvaluateIsValid. For anyone needing a refresher, check out this MSDN post on custom validation controls.
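    For reference, here's a minimal sketch of what a no-HTML validator might look like (the regex and implementation details are illustrative, not our production code):

    using System.Text.RegularExpressions;
    using System.Web.UI.WebControls;

    // A minimal no-HTML validator: validation fails if the target control's
    // value appears to contain an HTML tag. Illustrative only.
    public class HtmlInputValidator : BaseValidator
    {
        private static readonly Regex HtmlTag = new Regex("<[^>]+>", RegexOptions.Compiled);

        protected override bool EvaluateIsValid()
        {
            // GetControlValidationValue resolves ControlToValidate for us.
            string value = GetControlValidationValue(ControlToValidate);
            return String.IsNullOrEmpty(value) || !HtmlTag.IsMatch(value);
        }
    }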

There are multiple ways to go about attaching asp.net validation controls to the controls they need to validate, but in the end I settled on a base page class that our existing base page could inherit from...you just plug in the base page and everything is automagically wired up during the page request cycle. Given that asp.net validation controls will almost always need to validate TextBox controls (more specifically, any type of control that accepts user input), all we need to do is find controls that implement the IEditableTextControl interface. The existence of this interface in a control's inheritance hierarchy means that the control supports user editing of text.

    So in short, here is the plan of attack:

    • Author our custom validation control, or utilize one of the built in asp.net validation controls.
    • Create a base page that loops through all of the contained controls, and when a control is found that implements the IEditableTextControl interface, instantiate and attach our validation control. Recursion seems to be the best way to go about this.
    • [optional] Inject a ValidationSummary on the parent page to notify the user of any failed validation attempts.

    Here is the code for a base page that dynamically attaches a custom validation control called HtmlInputValidator:

    public class HtmlValidationBasePage : Page
    {
        protected HtmlValidationBasePage()
        {
        }

        protected override void OnLoad(EventArgs e)
        {
            List<Control> controls = new List<Control>();

            // Gather every control on the page, then attach a validator to
            // each one that accepts user-editable text.
            FindControls<Control>(Page.Controls, controls);

            controls.ForEach(delegate(Control control)
            {
                IEditableTextControl textControl = control as IEditableTextControl;

                // Only wire up controls that accept user-edited text and
                // have an ID we can point the validator at.
                if (textControl != null && !String.IsNullOrEmpty(control.ID))
                {
                    HtmlInputValidator handler = new HtmlInputValidator();

                    // ControlToValidate is resolved against the validator's
                    // naming container, so use the control's ID (not its
                    // UniqueID) and add the validator as a sibling of the
                    // control rather than as a child (a TextBox won't render
                    // child controls).
                    handler.ControlToValidate = control.ID;
                    handler.Display = ValidatorDisplay.Dynamic;
                    handler.Text = "Failed Validation for control " + control.ID;
                    handler.ErrorMessage = "Failed Validation for control " + control.ID;
                    handler.SetFocusOnError = true;
                    handler.EnableClientScript = false;
                    handler.ID = control.ID + "Validator";
                    control.Parent.Controls.Add(handler);
                }
            });

            ValidationSummary summary = new ValidationSummary();
            summary.ShowSummary = true;
            summary.DisplayMode = ValidationSummaryDisplayMode.List;
            Page.Form.Controls.Add(summary);

            base.OnLoad(e);
        }

        // Recurse through all of the controls on the page, collecting every
        // control assignable to T.
        protected void FindControls<T>(ControlCollection controls, List<T> foundControls) where T : Control
        {
            if (controls == null)
            {
                return;
            }

            foreach (Control child in controls)
            {
                T found = child as T;

                if (found != null)
                {
                    foundControls.Add(found);
                }

                FindControls<T>(child.Controls, foundControls);
            }
        }
    }
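    Consuming the base page is then just a matter of inheriting from it. Here's a minimal sketch (the page and handler names are hypothetical):

    // A hypothetical page that picks up HTML validation simply by
    // inheriting from HtmlValidationBasePage.
    public partial class ContactPage : HtmlValidationBasePage
    {
        protected void SubmitButton_Click(object sender, EventArgs e)
        {
            // The injected validators participate in the normal validation
            // cycle, so Page.IsValid reflects their results on postback.
            if (!Page.IsValid)
            {
                return; // the injected ValidationSummary lists the failures
            }

            // ...process the sanitized input...
        }
    }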

Of course this base page could (and should) be further extended to support turning validation on/off, or only validating certain groups of controls...the above sample is kept short for the sake of brevity and should serve as a starting point only.
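    For instance, turning validation on/off could be as simple as a virtual property on the base page that derived pages override (a hypothetical sketch; AttachValidators stands in for the injection logic shown above):

    // Hypothetical extension point: derived pages opt out of HTML
    // validation by overriding this property.
    protected virtual bool EnableHtmlValidation
    {
        get { return true; }
    }

    protected override void OnLoad(EventArgs e)
    {
        if (EnableHtmlValidation)
        {
            AttachValidators(); // the injection logic shown above
        }

        base.OnLoad(e);
    }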

I've always been a huge proponent of authoring a custom base asp.net page class from which the rest of your pages inherit...it is without a doubt the easiest (and cheapest) way to distribute reusable functionality to all of your asp.net pages with minimal coding effort. Base pages really start to shine in scenarios such as the one outlined in this post.

    See attached file below

  • Syntax For Generic Type Declarations In Type Elements

I was recently working on implementing a provider-based design for a project which also happens to make heavy use of generics throughout the provider architecture. The signature of the type to be used in the <providers> section of the config file is MyType<T, V>, but I kept getting the dreaded "Unable to load type 'MyType' from assembly 'MyAssembly'" error when attempting to run the application. After about 30 minutes of wringing my hands wondering what the heck was going on, I remembered that generic types have a different signature when declared in text form. The fix was simple: instead of declaring it as

    type="MyAssembly.MyType, MyAssembly"

    This needs to be changed to

    type="MyAssembly.MyType`2, MyAssembly"

The number after the '`' is the number of generic type parameters in the type's signature.
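    The backtick-arity convention is easy to confirm from code; Type.FullName emits exactly the string the config loader expects:

    using System;
    using System.Collections.Generic;

    class ArityDemo
    {
        static void Main()
        {
            // The CLR appends the generic parameter count after a backtick.
            Console.WriteLine(typeof(Dictionary<,>).FullName);
            // Prints: System.Collections.Generic.Dictionary`2

            // Going the other way, Type.GetType expects the same suffix.
            Type listType = Type.GetType("System.Collections.Generic.List`1");
            Console.WriteLine(listType != null); // True
        }
    }

    It would have been nice if the runtime had offered a hint like it does in other situations. This may be generics 101 for some, but I'm posting this in hopes it'll save someone else some time if Google picks up the keywords in this post.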

  • Using an HttpHandler to Forward Requests to a New Domain

    One of the hardships of moving to a new domain is intercepting and forwarding traffic from the old domain name over to the new one (without losing any traffic/google ranking/etc).  Custom HttpHandlers in .Net make this scenario ridiculously easy to overcome.  Pre-.Net, you would have had to roll your own ISAPI filter (in C++ no less) to accomplish something this seemingly trivial (though there is probably lots of pre-rolled code floating around out there, and more than likely some pre-baked turnkey 3rd party solutions as well).  Anyone who’s worked with custom ISAPI stuff knows that it has a habit of making trivial stuff not so trivial anymore…it’s a total PITA.

Enter the IHttpHandler interface, which in my opinion makes ASP.NET one of the most powerful web frameworks out there by making it so easy for developers to write their own IIS HTTP handlers.  I’ve utilized it quite a bit in my development forays, and it hasn’t failed me yet.  In this specific case (migrating from zerotrilogy.gotdns.com over to jaysonknight.com), the raw URL (the stuff after the domain) itself stayed the same (as it should); I just needed to swap out the domain name for incoming requests and redirect to the new domain.  So, here’s what needs to be done:

• Write a class that implements IHttpHandler.
    • Intercept all requests (i.e. map all incoming requests to the HttpHandler written above).
    • Parse the URL, swapping out the domain name.
    • Redirect the requests to the rewritten URL.

Here’s the code for the Redirect class (very much simplified…the URLs should really be stored in a configuration file, but I'm keeping it simple here):

    
     using System;
     using System.Web;

     namespace Dottext.Web
     {
         /// <summary>
         /// Redirects requests from the old domain to the new one via an
         /// HTTP 301 (moved permanently) response.
         /// </summary>
         public class Redirect : IHttpHandler
         {
             public Redirect() {}

             #region IHttpHandler Members

             public void ProcessRequest(HttpContext context)
             {
                 string newURL, oldURL;
                 string oldURLLocal = "localhost/jaysonblog";
                 string oldURLRemote = "zerotrilogy.gotdns.com/jaysonblog";
                 string newURLRemote = "jaysonknight.com/blog";

                 oldURL = context.Request.Url.ToString().ToLower();

                 if (oldURL.StartsWith("http://localhost"))
                 {
                     newURL = oldURL.Replace(oldURLLocal, newURLRemote);
                 }
                 else
                 {
                     newURL = oldURL.Replace(oldURLRemote, newURLRemote);
                 }

                 // Issue a permanent (301) redirect to the rewritten URL.
                 context.Response.AddHeader("Location", newURL);
                 context.Response.StatusCode = 301;
                 context.Response.End();
             }

             public bool IsReusable
             {
                 get { return true; }
             }

             #endregion
         }
     }
     

In essence, you have complete control over the HttpContext and can pretty much do anything you want with it.  It’s really obscenely simple.  All that’s left to do is map all incoming requests to the Redirect class in the web.config file (we’re redirecting all requests in this case, though you can map whatever content you need: .aspx, .asmx, etc.):

    
     <httpHandlers>
         <add verb="*" path="*" type="Dottext.Web.Redirect, DotText.Web" />
     </httpHandlers>
    

The great thing about this type of solution is that it doesn’t matter where the user is coming in from…you can intercept it all and handle it accordingly; this basically just serves as a broker of sorts.  The biggest caveat is that ending the response (whether by calling Response.End directly as above, or via Response.Redirect, which calls Response.End internally) raises a ThreadAbortException.  The end user won’t see the exception, but it forces the CLR to work overtime handling it under the hood and will degrade performance, so make sure the application is in a different application pool in IIS (6.0 of course…and if you’re not running 6.0, application pools are a very compelling reason to migrate…but that’s another post).
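    If the exception overhead is a concern, one alternative (a sketch, not part of the original handler) is to skip Response.End and let ASP.NET finish the request via HttpApplication.CompleteRequest:

    // Alternative ending for ProcessRequest: CompleteRequest bypasses the
    // rest of the pipeline without the ThreadAbortException that
    // Response.End raises.
    context.Response.StatusCode = 301;
    context.Response.AddHeader("Location", newURL);
    context.ApplicationInstance.CompleteRequest();

    Happy redirecting.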

  • Fedora Core 3 -- Virtual PC

Hello from Fedora Core 3 (running on Virtual PC)! I should mention before I continue that I am a complete linux noob (I can spell l-i-n-u-x, and that's about it). This isn't my first foray into linux land (I had a round with SuSE 7.0 previously...perhaps 3 years ago?), but it is indeed my first successful one; I never could find drivers for my Voodoo 4 video card on SuSE, so it was basically unusable. Onwards to now...

I won't comment on how I feel about Open Source Software (OSS, and linux in particular) other than to say that while I like the idea in concept, in reality it still has a long way to go. The install via VPC wouldn't have been possible without this guide, and even after following those steps, I was limited to 800x600 resolution in VPC. I was quickly having flashbacks of my previous attempt at a linux install, but a quick google found this post, which seemed promising. It took me another half hour or so of poking around to figure out that the XF86Config file doesn't exist on Fedora...there it's named xorg.conf and lives in /etc/X11. The configuration mentioned in the link works fine though, and I'm now happily running at 1600x1200. So why did I choose to start playing around with linux? No, I'm not jumping ship from MS products...I make my living as a .Net developer, so that would be similar to suicide. First reason: I like new (free) tech toys. The second (and main) reason is to start toying around with Mono, which could actually further my .Net career...a no-brainer.

A little background first... I grew up on Macintosh. My mom is a graphic designer, and in an effort to keep up with the graphic design world she purchased a Mac when I was 12 (circa 1990). Needless to say, I was hooked immediately. Mmmm, I still remember the little critter (a Macintosh LC, for any aficionados out there) like it was yesterday...12 inch monitor (sporting all of 256 colors...I know this to be true because I counted them all), an 8 megabyte hard drive, and a whopping 2 megabytes of RAM...all running on Mac OS 7. I would literally spend hours each day learning everything I could about it, and also played quite a bit of King's Quest (my first ever PC video game...Mom wouldn't buy me Leisure Suit Larry, so I had to settle for that). Little did I know that this would be the basis for my future career. Thanks Mom!

Why the sentimental paragraph above? Hmmm...this is where I'm supposed to segue into my point, I guess. When I switched to Windows from Mac a few years later, I felt like a fish out of water. What?!? No menu bar across the top of the desktop? And what's with this mouse with more than one button? A context menu...wtf is that? What does the registry do? Ad nauseam. I'm reliving that now trying to find my way around linux. While it's not as difficult as I thought it would be, it's definitely quite quirky. For example, Fedora doesn't seem to support MP3s out of the box (!!!). The built in (preferred) media player is Helix (the OSS offshoot of RealPlayer), and it complains if you feed it an MP3 file...but thankfully it tells you where to go to get a player that will happily accept and play one (RealPlayer for linux). Installing it is not a trivial task, but there is a bit of documentation that walked me through it (involving a terminal window and a few commands). The average user will find this simply unacceptable IMO, but perhaps I overlooked something. That being said, Firefox works beautifully, and almost all of the Firefox extensions available on Windows are also available on linux, so my browsing experience is virtually identical. aMSN gives me an MSN compatible messenger (again, a non-trivial install). I don't want this to be a "linux isn't ready for the desktop" post, but I can say with some certainty that the average user would have quickly grown frustrated and probably given up on the OS having experienced what I have so far.

I'm a technical guy, so I intend to use linux for technical reasons. Mono seems to be making some inroads, so it seems only natural that I should put it to the test. The first step? Installing it. If simply getting MP3s to play on linux was tough, I can't wait to see what challenges Mono presents. Cheers, and more to come on my Mono trials and tribulations.

  • List of Bugs I've Experienced (So Far) with Community Server 1.0

    Here is a list of the bugs I experienced while migrating from .Text .95 to Community Server 1.0 (as it pertains to the blog piece of CS only)…it’s obvious that I still have some work to do to get everything fixed: 

    • CommentRSS/API was generating Http 404 errors for RSS subscribers (fix is here).
    • Individual Post Category RSS feeds were generating Http 404 errors for RSS subscribers (fix is here).
• Comments made to blog posts on the website itself were not generating Email responses; they generated the following exception (fix is here; writing a simple trigger works around it):

      System.Runtime.InteropServices.COMException (0x8004020E): The server rejected the sender address. The server response was: 553 5.5.4 ... Domain name required for sender address Anonymous


• RSS comments show the comment author as Anonymous, while the feeds section of the website displays the name the author used in the comment; feed readers should show the same information (no fix yet, working on it).

    • Trackbacks show up as Anonymous; should display the name of the site that generated the trackback (no fix yet, working on it).

    • Comments/Feedback on blog posts in the admin section shows the author as being Anonymous; should display the value the author left in the comment (no fix yet, working on it).

    • By default, Community Server only allows a specific subset of HTML elements to be used in posts; this is configurable in the communityserver.config file (edit your communityserver.config file to allow the HTML elements/attributes you need; not extremely intuitive IMO).

    • Referrals migrated over from .Text don’t page correctly in the Admin/Referrals section; only the last 20 are displayed with no paging mechanism to see the rest of them (no fix yet, not a show stopper so I probably won’t fix it).

    • The timezone for RSS feeds was incorrectly set to GMT with no option in the Admin section to change this (fix is here, though you’ll need the CS source and a recompile).

    • Feedback in the Admin section for blogs should link back to the actual comment; you have to manually search for the post to view the feedback (no fix yet, working on it).

    • Trackbacks (sometimes) format incorrectly by using URL encoding for special characters (no fix yet, and this issue is probably last on my list).

    • Clicking the "Comments" link from the main page takes you to an anchor at the top of the comments, requiring scrolling up to see the "Add a comment" link; I prefer the old school "Show all the comments, and have the 'add comment' bit at the end" -- the present situation means a lot of scrolling up and down to read comments and post a response, and there's no way of seeing the comment you may be responding to without opening a second browser window (no fix yet, I have a feeling this will be addressed in 1.1 though as a lot of people are unhappy with this).

• Unable to comment on posts via CommentRSS/API (no fix yet, working on it).

• There has been mention of a ton of issues with the version of FreeTextBox that ships with Community Server; I use BlogJet for posting, so thankfully I haven’t seen these issues firsthand.  I do know that even though the version that ships with CS (v3.0) is supposedly compatible with Firefox, there’s some functionality that still doesn’t work.  I personally hate FreeTextBox and would recommend that folks use a 3rd party posting tool.

    • Link Categories/Links don’t sort properly when migrated over from .Text (manually updated the SortOrder column of the cs_Links/cs_LinkCategories tables).

• Through some trial and error, I figured out that the “Email me replies to this post” option under “Advanced Settings” in the post editor doesn’t do anything.  I confirmed this with Scott over at Telligent; it’ll be removed in 1.1.  The reason I tracked this down is that it’s set to false when posting from BlogJet; attempting to set it to true didn’t resolve my email issues...given that it does nothing, that makes sense.  The email fix is mentioned above.

• Tons of build issues when compiling from source.  It compiles in Debug mode, but switching to Release broke everything on my machine (assembly references), and then Debug wouldn’t work either.  If you’re going to compile from source, just leave it in debug mode and manually copy over updated assemblies to your target machine.

    • This one is strictly my opinion:  The CS database (as it relates to the blog piece only) is a trainwreck; violates 2nd normal form quite a bit.  For example, there are no less than 3 tables that deal directly with blog posts themselves, and there is a lot of redundant/duplicate data in each table.

• Again, strictly my opinion:  CS blogs really abuse the hell out of the Anonymous user account.  Blogs are supposed to be anonymous in nature; I don’t expect my audience to all have an account set up in CS (though if you do set one up you get some added features)...so everything gets lumped into a generic anonymous account, which makes it hard to keep track of who’s doing what on my site.  This may work well for forum type scenarios, but it doesn’t lend itself well to the blogging model.

So, obviously not a trivial list of bugs/issues/complaints.  That all being said, there are a few things that make Community Server a compelling product:

    • Search.  This was loooong overdue; makes it very easy to find pertinent content.
    • For those that can get FreeTextBox to work (and don’t mind using IE), it’s a huge improvement over the version that shipped with .Text.
    • In the admin section, there’s a reports section that will list all of the exceptions generated by CS, making it incredibly easy to track down bugs and either fix them or report them.
    • User accounts.  I know that most folks are reticent to sign up for accounts on sites, but it’s nice to have this option.
    • Initial deployment is much easier than it was with .Text (bugs aside).  Out of the box, it works pretty well if you’re willing to put some time into fixing the minor annoyances.
    • For those hosting (n) number of blogs, the notion of blog groups is quite nice, and the aggregate page looks great.
    • And finally, though a little overwhelming, the admin piece is a huge improvement over .Text.

If I had it to do over, I would have waited for 1.1…it’s been an interesting experience so far, though.  It’s all about learning, right?

    I have a feeling this list isn’t complete; I’ll post new bugs (and hopefully fixes) as they present themselves.  If anyone notices anything, please ping me and let me know.  And of course if anyone has fixes to the issues listed above, let us all know.  Cheers.


  • Starting A New Job On Monday (7/16)

It has been a very interesting two months since my last job related post; in a nutshell, I am not in Seattle, and I was not able to take the position with Microsoft that I've been talking about. I have an entirely separate post in the works which goes into more detail as to exactly what happened, as it's a story that needs to be told; however, I'd like to get settled into my new role before I post it.

Did he say new role? I think he did!...so what's this new role? After Microsoft and I decided to go our separate ways about a month ago, I was contacted about a position here in Charlotte with a company that (to be honest) I'd never heard of: Skanska USA, the American arm of the Sweden-based Skanska AB. They are the 3rd largest building company in the world, and are #433 on the Fortune Global 500 this year. Their IT group also made the 2007 CIO 100 list, and they've been named to the InformationWeek 500 on a regular basis over the past few years. The point is that they are a solid IT group...I did my homework this go round since a previous engagement turned out to be a huge disappointment in terms of overall technological prowess (or lack thereof). Needless to say, I am extremely impressed with Skanska.

The interview process I went through with Skanska was one of the more unusual experiences I've ever had. I won't give away their entire recipe for the secret sauce, but a huge part of their company philosophy is being able to have conversations with workers of any level, from the guy sweeping the floors all the way up to CxO level management, and as such I had to give a how-to presentation in front of the entire Charlotte office as part of the screening, with one stipulation: it couldn't be about anything technical (tell a geek they can't be technical for x amount of time, and interesting things are bound to happen). I did mine on how to fold origami cranes. Readers may not know this, but I have a pretty healthy fear of public speaking; it went off smashingly though, as over the years it's something I've gotten used to, and channeling the fear is no longer a problem. From a technical standpoint it was also one of the more involved processes I've been through (dare I say even harder than the Microsoft interviews?)...lots of practical tests. When all was said and done, they spent around 20 man-hours total getting to know me...very impressive.

One of the more captivating aspects of their development group: given all the case studies and public notoriety they have gained over the past few years, I arrived expecting a pretty large team of guys writing code. Suffice it to say, the development team is in the single digits. I even asked during the screening process something to the effect of "man, you guys must write a lot of [efficient] code"...their response was "you better believe it." Sounds like my kind of group of guys.

I start on Monday (7/16), and that should also mark the end of my blogging hiatus, as I've been doing minimal posting over the past few weeks while focusing on landing a position which fits my career goals both now and in the future. I'm happy to be aboard, and I have no doubt this new role will broaden my technical horizons, as they are doing some amazingly innovative stuff within their technology stack and are a very advanced group.

Sidenote: One thing that really impressed me with the overall market this go round (besides the fact that it's a job hunter's market right now) is that all of the groups I spoke with were familiar with Community Server. Since I'm a CS MVP, I think that definitely worked in my favor, so congrats to the Telligent team...your product is making inroads into the corporate IT realm, which of course is a huge market to tap into. Maybe I'll get to use CS in some of my daily work in my new role!

  • Community Server Migration Script -- Round 3

Exciting news on the .Text to Community Server migration script: Robert McLaws came across my previous post and tapped me on the shoulder to beef up the engine for multi-blog migration scenarios so that he could wrap a slick WinForms GUI wizard around it, and I’ve done just that.  From the prototypes he’s sent me, it looks very promising…who doesn’t like a sexy GUI over a command line interface, right?  The architecture of the migration engine itself is quite simple: pass it an array of .Text BlogIDs (from the blog_Config table), and it dynamically builds a corresponding DTS package for each of the BlogIDs.  The packages can either be saved to the server, or executed directly from the engine itself.

    There is only one thing that can be said about DTS…it’s f’ing fast.  I’m also equally impressed with how easy it is to dynamically generate DTS packages from code (though figuring out all the table/column mappings was the bulk of the work, and another story altogether).  I’ve posted my bits for the migration engine here (core engine is in DotTextCS.Engine.dll), though the frontend is a console app, connection properties need to be set via a standard .config file, and each user + blog needs to be manually configured via the CS admin pages before executing the utility.  See my comments here for further instructions.  Robert’s wizard will alleviate all of the manual-ness, so unless you’re feeling industrious (or have a trivial number of blogs to migrate), wait for the 1.0 release, hopefully to be posted within a couple of days, so stay tuned.

  • Sacked by Microsoft on XSharp

Sadly, I've been asked by Microsoft (the speaker in the X# video, no less) to remove the link to the presentation on X# delivered by Chris Lovett.  I could have sworn I snagged it from MS Research, but I trust his word more than my cluttered brain.  Here is the email I received:

    How did you get this? Actually, I don't event want to know, but it is pretty bad that you are making internal only video about our non-product related research project available to the general public. Please remove it, thanks.

    Chris Lovett. (the speaker in this video!)

I personally don't see anything “bad” about posting this; I certainly did it benevolently...though I do understand the significance of something internal getting leaked onto the internet.  I really hope some of the ideas presented by X# find their way into future versions of the CLR and BCL.

So enjoy it while you can; I will be removing it from my server shortly.

  • This Could Only Happen In North Carolina

A while back, the story about inhalable alcohol broke over the internet (I can't find the specific link, but this was announced back in 2004). Fast forward to today, and there's an article floating around stating the machines have been banned in my home state of North Carolina (in addition to 21 other states). My home state is at the very center of the bible belt, and is also a haven for commercial televangelism, so it's not a huge surprise until you factor in the following:

    • The company that distributes the Alcohol without Liquid machines is headquartered in North Carolina
• The reasoning behind the ban has nothing to do with the actual imbibing process; it's the fact that hangovers are averted, and thus the user isn't "punished" for committing what most Christians consider to be a sin: overconsumption of alcohol.

    From the article:

    Indeed, the main knock against AWOL in the U.S. is the absence of a hazard (hangover) usually associated with alcohol consumption, an apparent advantage that, to the horror of moralists like the Rev. Creech, divorces sin from punishment.

    According to the Christian Action League, the North Carolina bill, which was unanimously approved by the state Senate in April and unanimously approved by the state House yesterday, "makes it unlawful to inhale fumes for the purpose of intoxication, or to manufacture, sell, give, deliver, possess or use an alcohol vaporizing device. It also makes it illegal to possess or sell ethyl alcohol for the purpose of inhaling." So if you pick up a bottle of vodka in North Carolina and announce, "I think I'd like to inhale some of this," you have transformed an otherwise legal beverage into contraband.

What happened to separation of church and state? Down here in the South, that phrase rings largely hollow...the church wields huge influence over local government and politics. The reasoning behind this ban is ridiculous.

Sidenote: This story has zero impact on me; I don't drink (or in this case, inhale) alcohol. I'm also not a Bible-thumping Southerner.

  • Interviewing With Microsoft, And Landing The Job: Part 1 -- Preparation Is Key

As promised, I've decided to type up a short series on my interviewing process with Microsoft. I've talked about it before; however, that was from the perspective of just screening, not actually landing the position. I am by no means an expert on interviews or on getting the dream job, so this will just be my own experience of how it goes, and what ultimately worked to my advantage to help me succeed in the end. Most of this could probably be applied to interviewing with any company, but I'll put a Microsoft slant on it seeing as (maybe?) quite a few readers of this blog would jump at a chance to spend some time inside the walls of Microsoft. It's worth mentioning that I am *not* a traditional MS 'developer'...I will be working within Enterprise Support, which is a completely different entity and thus has different processes.

    This post is part 1 of this series.

    First and foremost, spend some time over on the MS careers site. On it you'll find a wealth of information about the different groups within Microsoft and what career tracks are offered in them. There is also a search page with a list of almost every job offered at Microsoft, so spend some time going through them and reading over the various requirements; you may find positions you didn't realize were even there. Of course Microsoft is first and foremost a software company, so the bulk of the jobs are related to the business of writing and selling commercial shrink-wrapped software (Windows, Office, server tools, etc). The vast majority of those traditional 'developer' jobs are located in Seattle, so you must be willing to relocate (though MS does have product groups scattered around the world, so do some research...there may be one closer than you realize).

    Interestingly enough this was a track that wasn't of much interest to me...probably because I come from an IT (Information Technology) background, which anyone in the business will tell you is a completely different beast than that of the ISV (Independent Software Vendor) realm. There are other ways to be a developer for Microsoft, but not work within a product group: Microsoft's own internal IT/Tools department (Microsoft like any other business has an IT department), or Microsoft Consulting which as a general rule is not location specific other than "must live near a major airport" as these guys travel a lot (most of my friends who work for Microsoft are employed in this capacity, and love their jobs as they get to interface with customers on a very regular basis), or as an Application Developer Consultant which are similar to architects. I'm sure there are other code centric roles within Microsoft, but these are the ones I'm familiar with offhand. The point is not to limit yourself to being a developer on a specific product, though I'm sure that's a track that will interest many.

Once you've done your research, it's time to polish up your resume and start applying. Over the years I've kept a custom resume for each of the major groups I've applied to. Find out the technical specifics of the job and home in on those areas in your resume. I cannot stress that enough. I can't even begin to imagine how many resumes MS Recruiting sees on a daily basis, so anything you can do to make yours stand out gives you that much of an edge over other candidates. The competition starts the second you hit the submit button over on the MS Careers site. It's also worth mentioning that the flood approach (while easy to do) will not get you any further than hand picking a few choice positions and pursuing those more actively. I've made it a habit to check the Careers page on at least a monthly basis...new jobs are popping up all the time. Perhaps most importantly: read the requirements. If they say you need 10 years in the financial industry, that economics 101 class you took back in college won't suffice. In a nutshell, if the position seems like a longshot, it probably is.

So you've sent out 10 resumes, now what? It's been my experience that MS Recruiting is pretty quick to contact candidates they are interested in. I can't give a hard number, but 2-3 weeks has been the average for me over the past 5 years. All in all I was contacted for about a dozen positions, with about half of those resulting in some actual face to face time with the group who was interested. The recruiter who contacts you can be located almost anywhere; Microsoft has relationships with many firms throughout the world when it comes to recruiting talent. The initial call will usually be normal HR stuff, so there is no need to put on your technical hat quite yet (that'll come later)...the usual "tell me about yourself, here's a description of the position, etc." By this point the group has shown interest, so unless you royally mess up, you'll get some airtime with folks in the group. It's worth mentioning that from this point forward, every conversation, every email, every type of contact, period, is to be considered part of the interview process. Get rid of that email stationery, colorful fonts, etc. Treat everyone as if they were clients you are already billing work to, or trying to court as a potential client.

The recruiter will want to block off a couple of hours for you to speak with members of the team (about half technical, the other half managerial, the so-called "MS" part of the interview). They will usually let you pick a range of dates and times, so make sure you pick blocks where you'll be at your best. I'm not a morning person, so I always tried to do mine in mid-afternoon when I felt sharpest. I wouldn't push dates farther than 10 business days away, and heck, if you're feeling up to it then by all means schedule it for the next day! I would recommend a week out; get the names of everyone you'll be screening with, write them down, and then it's time to hit the books.

    In this day and age of blogs (and especially MS employees keeping blogs) I always search for the names of the people who will be screening me. A lot of times you'll get some hits, and they might even have their own website. You should of course read their entire site/blog to get insight into the group, plus you can mention that you read their site in preparation for the screen. If you know the specific group name, you can search for it to see what you come up with. Stuff you can find on the web is as close to actually sitting beside that person as you'll get before they extend an offer. In my case, I read about a dozen blogs start to end from folks who worked in either my group, or groups similar to mine. The information I gathered from those sites was by far the most useful as they outlined situations this group dealt with on a daily basis (as well as the resolutions for many of them).

Picking the actual books you'll need to study is a different beast altogether. This is not a steadfast rule, but it's been my experience that the majority of the stuff I think I need to read doesn't come up in the actual screen...the stuff I've found during my web research has been more relevant. That being said, I pick 2 books (one of them from MS Press) and read them cover to cover. Treat them like college textbooks, and your final exam is the day of the screen. Use a highlighter, work through all of the exercises, etc. If you haven't finished them by the day of the screen don't worry, but do make sure to go back over whatever you felt was important the night before. Don't kill yourself memorizing acronyms or the layout of different configuration screens (in other words, the extreme details); concepts are most important (for example, in my screens I missed the actual names of several tools, but I knew they existed and was able to describe how to use them, which was good enough). Perhaps most important is to not overdo it. You aren't going to get any more out of an 8 hour cram session than a couple of 2 hour sessions spread over a couple of days. It may feel like you're cramming a bunch in, but that's just what it is: cramming (not retaining). My rule was about 4 hours a day, and I made myself stick to it. I also spent another hour or so just working through real world examples, either from my past work, from books, or contrived. Actually going through the paces is the best preparation, in my honest opinion.

My final word of advice for preparation: don't do any last minute cramming the day of the screen. If you do, you'll end up focusing too much on the areas you're "brushing up" on, when more than likely those areas aren't the answer to the questions at hand. If you're working at an office, I highly recommend trying to take the day off. If you can't get an entire day off, take at least half the day. If you can't do that, then schedule the screens for a day on which you can. Trust me, you're not going to want office hubbub on your mind if you want a decent shot at delivering "wow." If you've done your research on the group, applied to positions you're adequately qualified for, and gone through your reading material, you should be more than prepared for the actual technical content of the interview. What you will not be prepared for is the process itself, which I will go over in part 2 of this series, along with a more in-depth look at my actual experience this go round.

