1) If you choose to Ignore a unit test, note why it's being ignored and optionally how it is to be addressed. In NUnit, the attribute looks like this:
[Ignore("We can't run this test until the hardware is in place")]
Not only does this make it easier for a developer to figure out the situation later, but the reason also shows up in the build report, so you can see it without opening the code.
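For context, here's a sketch of the attribute in place on a test method (the fixture and test names are my own placeholders):

[TestFixture]
public class HardwareTests
{
    [Test]
    [Ignore("We can't run this test until the hardware is in place")]
    public void ReadsTemperatureFromSensor()
    {
        // Never runs; the reason string above appears in the test report.
    }
}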
2) Similarly, include a message in every Assert statement noting the reason for the failure. Again, the NUnit syntax:
Assert.AreEqual(expectedAge, actualAge, "Actual age doesn't match expected");
Without this, the NUnit report tells you that 12 didn't match 45, but it doesn't tell you whether you were comparing age or IQ.
3) Never copy/paste code. We all follow this rule in production code, but it is often ignored when writing unit tests. If every test requires a login before doing anything useful, move the login code to a separate method and mark it with the [SetUp] attribute. If some tests need login to fail, leave off the [SetUp] attribute and call the method directly from the tests that need it. Note also that you can use Asserts in these helper methods, so you can still verify they executed properly.
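Here's a sketch of that pattern; Server and Session stand in for whatever your application uses:

[TestFixture]
public class AccountTests
{
    private Session _session;

    [SetUp]
    public void Login()
    {
        // NUnit runs this before every test in the fixture.
        _session = Server.Login("testuser", "testpassword");
        Assert.IsNotNull(_session, "Login failed during test setup");
    }

    [Test]
    public void CanViewAccountBalance()
    {
        Assert.IsTrue(_session.CanViewBalance, "A logged-in user should be able to view the balance");
    }
}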
4) Use constants whenever appropriate. If every unit test accesses the same server, move the server name to a string constant. This way, when you later shift to a new test box, you only replace one string.
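For example (Database.Connect is a placeholder for however your tests reach the server):

[TestFixture]
public class ServerTests
{
    // Single point of change when the tests move to a new box.
    private const string TestServer = "testserver01";

    [Test]
    public void TestServerIsReachable()
    {
        var connection = Database.Connect(TestServer);
        Assert.IsNotNull(connection, "Could not connect to " + TestServer);
    }
}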
5) Use a TearDown method to clean up after a test. Largely, this is to reduce copy/pasted code (see item #3), but it also frees you from try/finally blocks. It's true that a failed Assert statement simply throws an exception, so technically you could do your cleanup inside a finally block, but that isn't a clean solution (which is why the TearDown attribute exists). One note: you may have to do some checking within the method; you wouldn't want to dispose a null object.
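A sketch, assuming the tests share a database connection field:

private SqlConnection _connection;

[TearDown]
public void CleanUp()
{
    // NUnit runs this after each test, pass or fail.
    // Guard against tests that never opened the connection.
    if (_connection != null)
    {
        _connection.Dispose();
        _connection = null;
    }
}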
6) Never hard-code expected values from a database. I've seen numerous unit tests similar to:
Assert.AreEqual(14, user.ID);
Primary keys are particularly problematic, but any field could cause issues, especially if users (or other unit tests) update values in the table.
One approach is to pull the expected data by some other means: perhaps inline SQL, perhaps another stored proc or service that returns the same info. Either way, the tests don't need to be updated simply because the data changes.
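For instance, a sketch of the inline SQL approach (TestConnectionString and UserService are placeholders):

[Test]
public void GetUserByName_ReturnsMatchingId()
{
    int expectedId;
    using (var connection = new SqlConnection(TestConnectionString))
    using (var command = new SqlCommand("SELECT ID FROM Users WHERE Name = 'jsmith'", connection))
    {
        connection.Open();
        // Pull the expected value from the table itself rather than hard-coding it.
        expectedId = (int)command.ExecuteScalar();
    }

    var user = UserService.GetUserByName("jsmith");
    Assert.AreEqual(expectedId, user.ID, "User ID doesn't match the row in the Users table");
}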
A second approach is to use mock objects and avoid real data altogether. Assuming you've coded to an interface, you can use a tool like NMock or Rhino.Mocks to simulate data retrieval.
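Here's a sketch using Rhino.Mocks (3.5-style syntax); IUserRepository, User, and UserService are stand-ins for your own types:

public interface IUserRepository
{
    User GetById(int id);
}

public class User
{
    public int ID;
    public string Name;
}

public class UserService
{
    private readonly IUserRepository _repository;
    public UserService(IUserRepository repository) { _repository = repository; }
    public string GetUserName(int id) { return _repository.GetById(id).Name; }
}

[TestFixture]
public class UserServiceTests
{
    [Test]
    public void GetUserName_ReturnsNameFromRepository()
    {
        // Stub the repository so no real database is touched.
        var repository = MockRepository.GenerateStub<IUserRepository>();
        repository.Stub(r => r.GetById(42)).Return(new User { ID = 42, Name = "Alice" });

        var service = new UserService(repository);

        Assert.AreEqual("Alice", service.GetUserName(42), "Service should return the stubbed user's name");
    }
}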
7) Test for exception cases using the ExpectedException attribute. There's no need to write a try/catch block just to verify that a particular exception was thrown; ExpectedException handles that for you, saving boilerplate. One argument against it is that you can only test one exception case per test; you could instead have multiple try/catch checks within a single unit test. The problem there is that NUnit logs one error per test: if the test has multiple failures, only one will be logged, and you won't notice the other issues until the first has been addressed.
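A minimal sketch (Account and Withdraw are placeholders):

[Test]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void Withdraw_NegativeAmount_Throws()
{
    var account = new Account();
    // The test passes only if this call throws ArgumentOutOfRangeException;
    // finishing without an exception (or throwing a different type) fails it.
    account.Withdraw(-50m);
}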
And Most Importantly:
8) Fix broken unit tests (keep the unit tests current). I've seen instances where unit tests were written to verify current behavior, but were then ignored as the code was modified. A large portion of a unit test's value comes from its ability to monitor the code, flagging errors as soon as they are introduced.