Friday, April 15, 2011

Simple way to backup event log on Windows Server

Is it possible to create a simple way to back up the event log, for example with a batch file or a small app? I need it to work at a customer's site, where the point of contact is a non-expert user. Thanks

From stackoverflow
  • If you're using Windows 2008, use the built-in wevtutil command. Example:

    wevtutil epl Application c:\temp\foo.evtx

    Otherwise, get dumpel.exe from the resource kit, or psloglist from http://technet.microsoft.com/en-us/sysinternals/bb897544.aspx

  • The Microsoft Script Center has some sample code for Backing Up and Clearing Event Logs using VBScript and WMI.

    Frank-Peter Schultze's Scripting Site has some code to clear an event log ( http://www.fpschultze.de/uploads/clrevt.vbs.txt) that you can modify to backup or backup then clear.

    If you have access to the server you can backup from the Event Viewer by right-clicking on a log and using the "Save Log File As..." command. You can save to a binary, tab delimited or comma delimited file.

  • Finally I made a little winapp using this method found on the internet:

    // Requires: using System; using System.Diagnostics; using System.IO;
    //           using System.Linq; using System.Xml.Linq;
    public void DoBackup(string sLogName)
    {
        // sLogName is the log to read, for example "Application"
        EventLog log = new EventLog();
        log.Log = sLogName;

        string sBackupName = sLogName + "Log";
        var xml = new XDocument(
            new XElement(sBackupName,
                from EventLogEntry entry in log.Entries
                orderby entry.TimeGenerated descending
                select new XElement("Log",
                  new XElement("Message", entry.Message),
                  new XElement("TimeGenerated", entry.TimeGenerated),
                  new XElement("Source", entry.Source),
                  new XElement("EntryType", entry.EntryType.ToString())
                )
              )
            );

        string sToday = DateTime.Now.ToString("yyyyMMdd_HHmmss");
        string path = String.Format("{0}_{1}.xml", sBackupName, sToday);
        xml.Save(Path.Combine(Environment.CurrentDirectory, path));
    }
    

    this is the source link:

    It simply works great!
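
    For completeness, a minimal usage sketch (assuming the method above sits in a class named LogBackup, which is not part of the original answer) that backs up the three standard logs:

    // Hypothetical driver for the DoBackup method shown above.
    // Note: reading the Security log usually requires administrative rights.
    var backup = new LogBackup();
    foreach (string logName in new[] { "Application", "System", "Security" })
    {
        backup.DoBackup(logName);
    }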

  • With PowerShell and Export-Clixml it's a one-liner.

      get-eventlog -list | %{ get-eventlog $_.Log | export-clixml -path ($_.Log + ".xml") }
    

How can I disable "demand paging" for one of my userspace programs ?

For a dedicated test I have to disable "demand paging" for exactly one of my userspace programs:

http://en.wikipedia.org/wiki/Demand_paging

Any idea how I could do this? (Embedded Linux appliance, 2.6 kernel.)

From stackoverflow
  • If you have the ability to modify the application, you could use the mlock() / mlockall() system calls to ensure that your memory doesn't get paged out:

    #include <sys/mman.h>
    
    mlockall(MCL_FUTURE);
    

    This will prevent all memory currently allocated, and any memory allocated to this process in the future, from being swapped out. You can use the mlock() system call to get finer control over which parts of memory are locked.

    Chris AtLee : In that case, you can disable your swap device altogether :) I'm not aware of a way to do the equivalent of mlockall to another process. You could try and hack the executable to insert a call to mlockall as the very first thing to do when the application starts.

Geolocation-based wireless mesh routing algorithm: useful?

I'm in a team and we are supposed to be setting up a 'geolocation based', ipv6, wifi mesh network to run on google android.

The idea is to use the geolocation (latitude, longitude, altitude) of each node to generate the IPv6 address, then use that to choose the closest node as the preferred node for routing through. Some optimisation is made through periodic 'routing load' messages.

http://www.v6pc.jp/apc/en/data/addressing.pdf

My understanding is: the main 'benefit' of doing a geolocation-based algorithm is that each network hop will step physically closer to the destination... and we assume that travelling along the most direct path to the destination is going to give us the best routing path.

BUT, my problem with this approach is that physical proximity + load does not necessarily imply good connection strength or fast routing. What if there's a thick lead wall(?) or lots of interference?

Surely proximity is a fairly useless metric for determining routes?

The primary metrics should be primarily based on performance/load shouldn't they? Something like, signal strength, demonstrated throughput, cpu usage, time connected to network, etc. At least that was my understanding of how routing paths should be chosen.

Why is this a good idea? Is there something I'm missing?

I haven't spent a lot of time working with networks, so please put it in layman's terms.

From stackoverflow
  • The three main parameters for any connection are bandwidth, latency and capacity. Bandwidth would be measured in bytes per second, latency in milliseconds, and capacity in % of bandwidth used. Those are all parameters you can determine for connections to neighboring nodes. (Before you join the mesh, the capacity used will likely be 0%.)

    You need capacity because you want to avoid congested nodes even if they would have good bandwidth and latency.

    secoif : Can you see -any- benefit in a proximity based algorithm?
    Michael Todd : @secoif: Like you said, close proximity does not indicate best performance. I would use it as a first-stage filter but then use the bandwidth, etc., as MSalters mentioned to zero in on the correct path.
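
    As a rough illustration of the 'proximity as a first-stage filter, then real link metrics' idea from these comments, here is a sketch (all class names, fields and weights below are made up for illustration):

    // Hypothetical sketch: shortlist neighbours by geographic distance first,
    // then rank the shortlist by measured link quality and load.
    using System.Collections.Generic;
    using System.Linq;

    class Neighbour
    {
        public double DistanceToDestinationMetres; // derived from geolocation
        public double BandwidthBytesPerSec;        // measured
        public double LatencyMs;                   // measured
        public double CapacityUsedFraction;        // 0.0 - 1.0, from routing-load messages
    }

    static class NextHopChooser
    {
        public static Neighbour Choose(IEnumerable<Neighbour> neighbours, int shortlistSize)
        {
            return neighbours
                .OrderBy(n => n.DistanceToDestinationMetres)   // proximity filter
                .Take(shortlistSize)
                .OrderByDescending(n =>                        // arbitrary quality score
                    n.BandwidthBytesPerSec * (1.0 - n.CapacityUsedFraction) / (n.LatencyMs + 1.0))
                .FirstOrDefault();
        }
    }
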
  • Is there something I'm missing?

    Yes, so far as I can tell there's no IPv6 support in Android yet...

How do I secure my Amazon S3 photos but still make them available via URLs?

I'm planning on making my (family) photo collection available online. I want to use S3 and build an ASP.NET site that will display the photos. I don't want the website to pull down the S3 content and return it to the browser. I want browsers to be able to go directly to S3 without affecting my ASP.NET bandwidth.

It is possible to build URLs for each photo if I set the S3 permission to public, but I only want the photos to be accessible by visitors to my website, not anyone who has the URL.

Any ideas greatly appreciated!

From stackoverflow

TFS Branch/Merge meets History View

We have a setup with a development "trunk" in our recently-migrated-to-from-VSS TFS system and developers have been doing work in branches off the trunk, which are merged back in.

We've been diligently commenting our changesets at check in time, something we never did in the VSS days. However when I right-click on a trunk file in the Source Control Explorer and choose History, I only see monolithic changesets labeled "merge from dev branch" (or whatever the developer scribbled in there when they merged.) A history entry doesn't even seem to contain info on which branch was merged in at that time, let alone any info about the changesets that make it up, or the comments that go with them.

How have other TFS users dealt with this issue? Is there another way to view the history that I'm missing here? Thanks for your help.

From stackoverflow
  • This might be what you are looking for: http://www.codeplex.com/TFSBranchHistory

    Haven't used it personally, so I can't vouch for it.

    Barry Fandango : that seems to be the closest thing i can get, i installed it and it works pretty well! thanks for the link.
  • Looking at the history of a change prior to the merge has been a bit of a pain point with TFS. So much so that Microsoft have done a lot of work to address this in the next version of TFS (TFS 2010). In TFS 2010 (when it comes out), when you get to a merge in the history view it is actually a little twistie that you can expand and go see the history for the thing that was merged which is much nicer.

    In the meantime, when I see a big monolithic merge (or branch) comment I tend to let out an audible sigh and then go find the file in the branch it was merged from in Source Control Explorer and do a view history there.

    Barry Fandango : Wish I could accept two answers but Ryan was closer. Thanks for the information, very useful.

Custom icon per file instance

Some Windows programs can use different icons for different files with the same extension.

Example

  • .sln can show a different icon depending on what version of Visual Studio the solution was made in (actually, determined by the version number in the top line in the .sln)
  • Photoshop .psd files have icons with a thumbnail of the image
  • A .url shortcut file has the page's favicon if opened in, or saved from, Internet Explorer

I'm guessing it must be custom to that computer only. On a box without Visual Studio installed, .sln files just have the default 'I don't know this program' icon. Is there something that needs to be changed in the registry?

How can I do this? I'd like to have the option of associating custom icons with files to my own programs.

[Edit] I really wish I could do this in managed code. It's possible (<SDK v1.1>\Samples\Technologies\Interop\Applications\ShellCmd) but it also appears to be potentially dangerous and the wrong tool for the job in practice: http://social.msdn.microsoft.com/forums/en-US/netfxbcl/thread/1428326d-7950-42b4-ad94-8e962124043e/. I really hoped MS would have a good managed API for this kind of stuff by now.

From stackoverflow
  • Check out this C++ code on CodeProject: essentially, you write a COM handler and register it for your extension. Be aware that you can mess up the Explorer process pretty badly if you leak resources in icon handlers or shell extensions... C++ may be a challenge, but I wouldn't recommend doing this in C# or Java for reasons of memory consumption (a separate copy of the framework code for each extension/handler!).

    Dinah : Great link and good warning about .NET/Java. I've never coded in C++ before. Is this beyond my skill level unless I learn C++?
    Pontus Gagge : You can implement COM interfaces in .NET pretty easily, but the Explorer process will get bloated. It depends on what you want to achieve... but a small memory footprint is a definite plus for plugins!
    Pontus Gagge : Both C++ and COM at one go is pretty heavy duty programming: not impossible by any means, but plenty of things can trip you up, not least leakages.
    Pontus Gagge : But if you're doing it for fun, by all means try a .NET solution. I'd give some thought to testing it out on a virtual machine first, though!

Binding DropDownList to an Enumeration

How do i bind a regular asp.net dropdownlist to an enum?

From stackoverflow
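
One common approach (a sketch, not from the original thread; it assumes a WebForms page with a DropDownList whose ID is ddlDays and uses System.DayOfWeek as a stand-in enum):

// Bind the enum's values; the list shows the enum names via ToString().
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        ddlDays.DataSource = Enum.GetValues(typeof(DayOfWeek));
        ddlDays.DataBind();
    }
}

// Reading the selection back later:
// var selected = (DayOfWeek)Enum.Parse(typeof(DayOfWeek), ddlDays.SelectedValue);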

Equivalent of IllegalArgumentException of Java in C++

In Java, if an input argument to a method is invalid, we can throw an IllegalArgumentException (which is a subclass of RuntimeException). In C++ there is no notion of checked and unchecked exceptions. Is there a similar exception in standard C++ which can be used to indicate a runtime exception? Or is there a common style, not in the standard, that everyone follows in practice for a situation like this?

Or, should I just create my own custom exception and throw it?

From stackoverflow
  • std::domain_error may be what you are looking for, but I suspect very few people use it. Most people derive their own exception types from std::exception.

  • If by invalid you mean a value that doesn't satisfy what the method expects, you can throw std::logic_error or std::runtime_error.

    If you mean something related to casts, like one object that can't be converted to another, there is no exception for that and it won't be thrown automatically.

    In fact it will. But only for dynamic_cast<> on references. It will throw

    std::bad_cast
    

    I am not sure it is a good idea to throw this one on your own.

    I prefer to use logic_error and its derivatives when someone has passed a wrong parameter, because it is a logic error: the programmer passed the wrong kind of argument.

    But most of all I like to use assert in such cases, because things like passing wrong values or types to your function should only happen during development, and such checks can be left out of the release build.

    Tom Hawtin - tackline : Does dynamic_cast<>() with a reference type throw a standard exception?
    David Rodríguez - dribeas : It does, a std::bad_cast exception. If it is with references. With pointers a 0 is returned and the user code must check the result value.
    Mykola Golubyev : Yeah, std::bad_cast.
  • I always use std::invalid_argument for illegal arguments.

  • Unlike Java, C++ does not have a "standard framework" but only a small (and optional) standard library. Moreover, there are differing opinions among C++ programmers on whether to use exceptions at all.

    Therefore you will find different recommendations by different people: Some like to use exception types from the standard library, some libraries (e.g. Poco) use a custom exception hierarchy (derived from std::exception), and others don't use exceptions at all (e.g. Qt).

    If you want to stick to the standard library, there exists a specialized exception type: invalid_argument (extends logic_error).

    #include <stdexcept>
    
    // ...
    throw std::invalid_argument("...");
    

    For the reference: Here is an overview of standard exception types defined (and documented) in stdexcept:

    exception
        logic_error
            domain_error
            invalid_argument
            length_error
            out_of_range
        runtime_error
            range_error
            overflow_error
            underflow_error
    
  • You can throw a standard exception or roll your own. You may want to include additional information in the exception you're throwing, and that would be a good reason to do your own.

    Personally, I haven't seen such domain checking in systems I've worked on. It certainly isn't universal.

  • Theoretically, I believe one should avoid throwing exceptions because of illegal parameters, unless these parameters are derived from user input. The (theoretical) reason is that illegal parameters are a programmer error and as such should be trapped during the development process.

    In practice, however, you cannot expect, especially when working with a large legacy code base, to cover 100% of your program during testing. There is therefore a risk that code passing illegal inputs will be run only in production, not during testing. Then it may be more convenient for you to have an exception thrown, caught and logged, or whatever. You have to use your judgement here. For example, if I was coding a routine which controls expensive automated equipment, I would not be above inserting run-time checks for the correctness of inputs into release code, even if it could be seen as defensive programming.

Polymorphism in WCF

Hi,

I'm looking at building a WCF service that can store/retrieve a range of different types. Is the following example workable and also considered acceptable design:

[ServiceContract]
public interface IConnection
{        
   [OperationContract]
    IObject RetrieveObject(Guid ObjectID); 

   [OperationContract]
    Guid StoreObject(IObject NewObject); 


}

[ServiceContract]
[ServiceKnownType(IOne)]
[ServiceKnownType(ITwo)]
public interface IObject
{
    [DataMember]
    Guid ObjectID;

}

[ServiceContract]
public interface IOne:IObject
{
    [DataMember]
    String StringOne;

}

[ServiceContract]
public interface ITwo:IObject
{
    [DataMember]
    String StringTwo;

}

When using the service, I would need to be able to pass the child types into the StoreObject method and get them back as their Child type from the RetrieveObject method.

Are there better options?

Thanks, Rob

From stackoverflow
  • I believe that your solution is correct. I used the same approach, and it worked quite well.

    Ariel

  • Your example will not compile because interfaces cannot contain fields, which is what ObjectID, StringOne, and StringTwo are. What you're trying to define with IObject, IOne, and ITwo is a data contract, not a service contract. As such, you should be using the DataContract attribute, not the ServiceContract attribute and classes, not interfaces.

    [DataContract]
    [KnownType(typeof(MyOne))]
    [KnownType(typeof(MyTwo))]
    public class MyObject
    {
        [DataMember]
        Guid ObjectID;
    }
    [DataContract]
    public class MyOne : MyObject
    {
        [DataMember]
        String StringOne;
    }
    [DataContract]
    public class MyTwo : MyObject
    {
        [DataMember]
        String StringTwo;
    }
    

    Notice that these are classes, not interfaces. The DataContract attribute has replaced the ServiceContract attribute. The KnownType attribute has replaced the ServiceKnownType attribute. This is more canonical from what I've seen.

    Your service contract would then be defined like this:

    [ServiceContract]
    public interface IConnection
    {
        [OperationContract]
        [ServiceKnownType(typeof(MyOne))]
        [ServiceKnownType(typeof(MyTwo))]
        MyObject RetrieveObject(Guid ObjectID);
    
        [OperationContract]
        [ServiceKnownType(typeof(MyOne))]
        [ServiceKnownType(typeof(MyTwo))]
        Guid StoreObject(MyObject NewObject);
    }
    

    You can put the ServiceKnownType attributes at the contract level (i.e., beneath the ServiceContract attribute) to have it apply to all operations of the contract.

    [ServiceContract]
    [ServiceKnownType(typeof(MyOne))]
    [ServiceKnownType(typeof(MyTwo))]
    public interface IConnection
    {
        [OperationContract]
        MyObject RetrieveObject(Guid ObjectID);
    
        [OperationContract]
        Guid StoreObject(MyObject NewObject);
    }
    

    You can use interfaces in your data contracts like this:

    interface IEmployee
    {
        string FirstName
        { get; set; }
        string LastName
        { get; set; }
    }
    [DataContract]
    class Employee : IEmployee
    {...}
    

    However, the IEmployee interface is not included in the exported metadata. So if you use svcutil to generate your proxy classes, your clients will not know about IEmployee. This is not a big deal if your service and client reside in the same application (which is a good way to communicate between app domains). However, if your client is separate from your service (and in the overwhelming majority of cases it will be), this becomes problematic because you'll have to duplicate the IEmployee interface on the client side manually.
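
    To round this out, a sketch of what the calling side might look like with the class-based contracts above (this assumes the [DataMember] fields are exposed as public properties and that a proxy class named ConnectionClient has been generated; neither is part of the original answer):

    // The KnownType/ServiceKnownType attributes are what let MyOne round-trip
    // through parameters and return values that are typed as MyObject.
    var client = new ConnectionClient();

    Guid id = client.StoreObject(new MyOne { ObjectID = Guid.NewGuid(), StringOne = "hello" });

    MyObject fetched = client.RetrieveObject(id); // declared as the base type...
    MyOne asOne = fetched as MyOne;               // ...but actually a MyOne again
    if (asOne != null)
    {
        Console.WriteLine(asOne.StringOne);
    }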

Maven2 & Swing projects: Build & run swing application

I tried to find info on how to use Maven to build and run a Swing application, but couldn't find anything useful (the Maven documentation is a mess).

Can someone point me to relevant documentation? Is anyone using Maven in Swing development?

From stackoverflow
  • What exactly do you want to achieve?

    A Swing application is a "normal" Java application, so it has no specific needs regarding the Maven configuration.

    You can have a look here to know how to create a runnable JAR file with Maven. You can also have a look here in order to create a JAR file that contains all dependencies.

  • I'm guessing that you want to run your app from a maven command. You can use the exec plugin like this:

    <build>
        <plugins>    
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.1-beta-1</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>java</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <mainClass>com.package.MainClass</mainClass>
                    <arguments>
                        <argument>arg1</argument>
                        <argument>arg2</argument>
                    </arguments>
                </configuration>
            </plugin>
        </plugins>
    </build>
    

    You may need this in your pom as well.

    <repositories>
        <repository>
            <id>Maven Snapshots</id>
            <url>http://snapshots.maven.codehaus.org/maven2/</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
            <releases>
                <enabled>false</enabled>
            </releases>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>Maven Snapshots</id>
            <url>http://snapshots.maven.codehaus.org/maven2/</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
            <releases>
                <enabled>true</enabled>
            </releases>
        </pluginRepository>
    </pluginRepositories>
    

    The actual configuration may vary, depending on which version of the exec plugin you actually end up using - I've had success with some versions, but no success with others, so it's kind of trial and error to figure out the right version of the jar for your project. It's also kind of a pain if you have multiple developers, as arguments for one dev may not be correct for another, so it may be better just writing a batch/shell script to start the app.

    Just for completeness, here's some sample code to make an executable jar file to go with the link in romaintaz's answer.

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>com.package.MainClass</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
    
    javito : Thanks, the exec plugin did the trick. Awesome answer.

How to display accurate times for testing performance numbers in C#

Hi,

I want to test the speed of an algorithm. What DateTime overload will give me the most precise time? (I still need day/month/year/seconds, but I want milliseconds as well.)

From stackoverflow
  • Try using System.Diagnostics.Stopwatch to do this:

    http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx

    annakata : that's probably more in the spirit of what the OP is asking actually - @OP use ElapsedTicks on the stopwatch for fine grain
    Samuel : Yes, the OP would probably be better off with a Stopwatch.
  • Others have mentioned Stopwatch, which is indeed a good idea. However, I have a different approach: I try to measure the algorithm for long enough that the normal system timer resolution would be adequate. (I generally still use Stopwatch, as it's the right type for the job, but it wouldn't matter.) For example, in my recent IO testing I only bother to report seconds, because my tests take minutes (sometimes half an hour) to run. At that point, milliseconds are inappropriate, because they'd be lost in the noise of other processes interrupting etc.

    It's not always possible to run tests for that long of course, but it's a nice thing to do where you can. For shorter tests, I'd be wary of benchmarks which take less than about 5 seconds... a brief bit of activity from another process can have a disproportionate effect.

    Another thing to consider - measure CPU time instead of wall time:

       // Note: this needs "using System.Runtime.InteropServices;" and the FILETIME
       // struct from System.Runtime.InteropServices.ComTypes.
       [DllImport("kernel32.dll")]
       [return: MarshalAs(UnmanagedType.Bool)]
       static extern bool GetProcessTimes(IntPtr hProcess, 
          out FILETIME lpCreationTime, 
          out FILETIME lpExitTime,
          out ulong lpKernelTime,
          out ulong lpUserTime);
    
       static ulong GetTime(Process process)
       {
           FILETIME lpCreationTime, lpExitTime;
           ulong lpKernelTime, lpUserTime;
    
           GetProcessTimes(process.Handle, out lpCreationTime,
                           out lpExitTime, out lpKernelTime, out lpUserTime);
    
           return lpKernelTime + lpUserTime;
       }
    
  • What about using DateTime.Now.Ticks?

    Store the value, run the algorithm, compare the stored value to the current value and there's the number of ticks it took to run the algorithm.

  • When you subtract two DateTime structures, you get a TimeSpan structure. Using TimeSpan you can get the interval between the two times.

    var before = DateTime.Now;
    System.Threading.Thread.Sleep(6500);
    var after = DateTime.Now;
    
    var timeTaken = after - before;
    var secondsTaken = timeTaken.TotalSeconds;
    var millisecondsTaken = timeTaken.TotalMilliseconds;
    

    Note you have to use the TotalXXXX properties to get the whole interval expressed in that unit; the plain properties only give that component of the TimeSpan. In this case timeTaken.Seconds = 6 but timeTaken.TotalSeconds = 6.5.

    Jon Skeet : Why use DateTime when Stopwatch gives a more accurate timer and you don't need to worry about things like Seconds vs TotalSeconds?
    Samuel : He did ask for a DateTime solution, so here it is. But I would recommend using a Stopwatch.
  • Here is a code example for using the System.Diagnostics.Stopwatch class:

    Stopwatch sw = Stopwatch.StartNew();
    // Put the code here that you want to time
    sw.Stop();
    long elapsedTime = sw.ElapsedMilliseconds;
    

    DateTime.Now is only accurate to within about 15 ms, so it is not ideal for timing short algorithms.
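
    Since the question also asks for a full date/time stamp down to milliseconds, here is a small sketch combining both ideas (the format string and method names are just examples):

    // Report a wall-clock timestamp with milliseconds for the log line,
    // but measure the algorithm itself with Stopwatch.
    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            RunAlgorithm();                      // hypothetical method under test
            sw.Stop();

            // "yyyy-MM-dd HH:mm:ss.fff" gives day/month/year and seconds plus milliseconds.
            Console.WriteLine("{0}: algorithm took {1} ms",
                DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff"),
                sw.ElapsedMilliseconds);
        }

        static void RunAlgorithm()
        {
            System.Threading.Thread.Sleep(123);  // stand-in for the real work
        }
    }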

Software solution for communication between two teams?

My department is working on a project that requires us to heavily communicate with another team, whose software ours has to interact with.

Today this resulted in a 4 1/2 hour conference call that the whole team had to attend, with very few results and no progress in the actual coding at all. To me, it was a complete waste of time, except for maybe 15 minutes where we discussed a problem I am facing in my task (without reaching any consensus on how to proceed).

The two pieces of software have to interact, but they are nominally different projects. Since both teams are from different areas, nobody is nominally in charge of both of them. So we have to resolve all differences between ourselves, with interdepartmental politics shining in all its beauty.

I am looking for a way to reduce the time wasted in those meetings. Basically, any management solution (appointing someone who is in charge of the meetings, who takes responsibility, etc.) is a no-go. Since we are all developers, I think that a software solution for communicating with the other devs would be easy to sell.

I am specifically looking for something that:

  • Documents the existing interface
  • Documents the interface requirements by the opposite teams
  • Lets me establish relations between those two (as in interface A fulfills requirement 29)
  • Lets me mark an interface as implemented / tested / buggy / stable
  • Provides a report which requirements have no interface to fulfill them
  • Provides a report which requirements have not been implemented /tested yet

Do you know of any tools that could provide this? I would prefer web based solutions, and free software (as in free beer), because this is only my personal initiative, and there is no budget available.

Edit:

I know that Stephan Eggermont is right, we are having behaviour problems. But that is something that I am unable to change (believe me, I tried), hence my idea to fix the symptom instead of the problem.

From stackoverflow
  • It's called a word processor. I've been in your situation and there really is no alternative to sitting down and writing specs that both teams can agree on.

  • I've been using TeamSystem so far, and I'm very pleased with it.

    Of course it's not free, but your company may already have licences for it.

    On the plus side :

    • Bugs, Tasks and WorkItems are fully customizable (i.e. you can choose the exact info you want to appear without bugging everybody with hundreds of useless fields that nobody ever fills in)

    • Bug, Tasks, WorkItems can be marked as implemented/tested/whatever you decide (You can manually configure TeamSystem to use exactly the "states" you need)

    • There's a complex query/search system which lets you get exactly the reports you want (TeamSystem uses a SQLServer database, and can easily interface with sharepoint to display reports, or you can use other tools to do queries manually)

    I've been using TeamSystem in the past when I was a PM on a medium-sized project (12 coders, 3 Business Analysts, 3 Testers). It took a few days getting everyone used to the system, but then, it's been extremely useful.

    All the following tasks takes seconds to achieve :

    • Create a new requirement (you can categorize business req. and technical req. if you want to)
    • Affect a task to someone (you can finely tune who can do that)
    • Change the status of a bug/task (again, it's tunable)
    • Checkin some code and "attach it" to a task/bug/work item
    • Getting a report of unfixed bugs, unattended tasks, etc.
    • ...
  • You could try BaseCamp or FogBugz. I haven't used either product but they are targeted at your needs. However, you still will need a project manager (not necessarily a Project Manager) who is responsible for coordinating the two teams' activities.

    Treb : You're right of course, we *desperately* need a PM. But we aint gonna get one...
  • Tools are the wrong answer.

    You're having behavior problems, not tool problems.

    Why are you meeting 4.5 hours with both whole teams? What problems are you trying to solve? It sounds as if you don't have a public agenda, with a list of people interested in the different subjects. Politics should be killed by making explicit decisions (and writing them down). Did you create a list of stakeholders and their (especially conflicting) objectives?

    When you want to introduce tooling, you should be very careful. The tooling should be politically acceptable for all parties involved, otherwise it won't be used. You might want to use De Bono's six thinking hats (see wikipedia) to evaluate tools.

    I've found 1st of April to be an excellent moment to show management their failures. Are you good at writing vision statements?

    Management has taken a detailed look at the development process and found some improvement opportunities. Currently we are spending 4.5 hrs/wk * n people * hourly rate resolving communication issues between the dev teams. That is going to cost us x over the expected project lifetime.

    With agile practices (pair programming, daily standup) we'll be able to reduce that to 10min/day if we pair each programmer from team 1 with one from team 2. For that to work we need to invest in a webcam and an extra monitor for each team member, and sufficient bandwidth, etc...

    [edit] I've recently started doing pair programming through Skype. It works much better (with a webcam and screen sharing) than I'd have thought possible.

    Treb : I can't change the corporate culture here. I've tried, I've failed. Now I need an alternative solution.
  • I managed several projects with offshore teams. To enable collaboration a valuable practice borrowed from Scrum was the daily meeting. >> http://www.mountaingoatsoftware.com/daily-scrum

    As a communication tool we simply used Skype, but the tool that really made the difference for us was Assembla: have a look at http://www.assembla.com. It's not only shared source control hosted in the cloud; it's a powerful and effective cooperation platform.

    Treb : Will look into it, thanks!
  • I don't know of anything that is going to be a perfect fit (would be interested if there was) but you could have a look if any of these could be used to help?

    Maybe even creating your own basic app in MS Windows Office Live (you can knock up simple list data sets pretty quickly)

    or using TODOList (a project at www.CodeProject.com) with MS Live Mesh to sync the files...

    or have a browse through sourceforge for something similar.

    But ultimately someone is going to have to run a meeting to pin it all down. :(

  • Just a quick note about assembla... (not rated enough to comment)

    If you use the free plan, projects are 'open' for viewing and search engines are allowed access to your projects.

    regards

  • A wiki has served my teams very well. Team members can write new documents, add extended information to existing documents, include images from Visio, and so on. It has been very effective and very easy to maintain.

    Since the wiki engine (we use ScrewTurn) has hierarchical organization built-in, it's easy for us to document our APIs and business processes in the same tool.

    T.E.D. : Never tried it in this situation, but a Wiki would have been my suggestion. You need to get managers and uninterested parties out of discussions or things will drag on forever.

classic asp like recordset object, how to design it in .net?

With the help of you guys, I have created an object that is similar to the recordset in classic ASP.

public class RecordSet: List<Dictionary<string, object>>
{

}

So I can access the data like:

RecordSet rs = new RecordSet();

rs[rowID]["columnName"];

How can I also get access like this, i.e. by ordinal reference:

rs[rowId][2];

Please don't ask me why I don't just use a DataTable; this is partly for fun, learning and testing hehe

From stackoverflow
  • Also, you'll want to look at the IEnumerable interface, which you can implement to be able to 'foreach' loop through the class's rows.
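
    For illustration, a small usage sketch (not from the original answer): because the RecordSet in the question derives from List<T>, it already gets IEnumerable<Dictionary<string, object>> for free:

    // Iterating the RecordSet from the question with foreach.
    RecordSet rs = new RecordSet();
    rs.Add(new Dictionary<string, object> { { "columnName", "hello" }, { "other", 42 } });

    foreach (Dictionary<string, object> row in rs)
    {
        Console.WriteLine(row["columnName"]);   // prints "hello"
    }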

  • Something that would work more like a RecordSet could look like this:

    public class RecordSet {
    
       private Dictionary<string, int> _nameLookup;   // column name -> ordinal
       private List<List<object>> _rows;
       private int _currentRow;
    
       ...
    
       // ordinal access: rs[2]
       public object this[int index] {
          get { return _rows[_currentRow][index]; }
       }
    
       // name access: rs["columnName"], resolved to an ordinal first
       public object this[string name] {
          get { return this[_nameLookup[name]]; }
       }
    
       ...
    
    }
    

How can I export my MS Access Data to SQL Server?

I have a MS Access database and I want to convert it to run on MS SQL Server.

How can I export it?

From stackoverflow
  • Tools.. Options... Upsizing Wizard, if I recall correctly.

    Strictly speaking, you'll always need software to do it, unless you have 3 wishes from the ETL fairy.

  • You can also use SQL Server Integration Services to import into SQL Server (2005), rather than export.

  • If your SQL Server is SQL Server 2000 or SQL Server 7, the SQL Server Integration Services mentioned by Miles D was called Data Transformation Services. By the way: I love this tool. It's awesome.

    If for some reason you can't (or don't want to) use any of these tools, you could always write a very short piece of code (in Java, C# or whatever you feel comfortable with) to accomplish the same thing for your specific problem.

  • Use the free Microsoft SQL Server Migration Assistant for Access (SSMA): its purpose is to convert Access apps to SQL Server, it's great and it's free.

    I wrote a blog post about it:
    http://blog.nkadesign.com/2009/ms-access-upsizing-to-sql-server-2008/

How do I edit a text field (or ntext) in SQL Server 2000 or 2005 using the GUI?

  1. Is there a way to do this with SQL Server tools?
  2. If not, is there a 3rd party tool that does this?

There doesn't seem to be a good way to edit longer text columns in the SQL Server Managers for SQL Server 2000 or 2005. While SQL Server Manager is really not for editing data in your db, what other tool does Microsoft provide that would normally allow you to do this? Every other field is pretty easy to edit, except long text fields. In Access, you could hit Shift-F2 and it would pop up a nice dialog to edit your text in.


From stackoverflow
  • Generally, SQL Server Management Studio is an administrative tool for your database and not meant for data entry other than a quick edit here or there. Normally you would script the data, or it would be entered by an application that uses the database for persistence.

    (Although I have pointed Access to my SQL Server DB for a better quick and dirty UI.)

  • In the Management Console, isn't it possible to do an "Open Table" context-menu action and then edit the data from there?

    Michael Pryor : Long multiline text fields cannot be edited this way by a normal human.
  • This may fit your bill: SQL LOB Editor.

    The other option you might want to look at is EMS SQL Studio for SQL Server.

    Marc

  • If you are specifically after a nice big multi-line edit dialogs, then yes you definitely need to look outside of the Microsoft SSMS line of tools. They don't support it.

  • I totally recommend DBVisualizer. The nice thing about it is that it supports a long list of databases and, generically, all JDBC drivers (since it is written in Java). You can browse your various databases, change data and explore schemas in nice graphs. It comes in a free edition and a personal edition for $149, and is totally worth it! Look at this matrix for a comparison.

    You'll be able to edit text and ntext in SQL Server 2000, 2005 and surely in upcoming versions as well.

  • If I had to make the edits only occasionally I would probably use SQL Query Analyzer and just script the UPDATE command.

    If that was too inconvenient I would next look at linking to the database in Access, and for really quick and dirty ease of use I would just use an AutoForm to generate a UI for the table. If you don't have Access, I believe OpenOffice Base can connect through ODBC and has similar form-building functionality.

  • This is just stupid. Enterprise Manager for SQL Server 2000 handled multiline text just fine. It's just silly to go backward and lose functionality that was pre-existing. It's not like it is rocket science to reproduce formatted text. Someone really dropped the ball on this one.

What's the difference between an interface and an abstract class?

Duplicate:

When to use an interface instead of an abstract class and vice versa?

Probably one of the most famous software developer job interview questions.

What would be your answer?

EDIT: I'm trying to find out how you would answer this in a real-life situation. Please try to formulate your answer as you would on a real job interview (be complete, but don't be too long, post no links of course).

From stackoverflow
  • I would say that the difference is language dependent, but that in C++ at least, abstract classes are the means by which interfaces are implemented.

  • An abstract class can have member variables, an interface cannot (or, in C++, should not).

    In Java, an "Interface" is a well-defined syntactical element, while in C++ it's merely a design pattern.

    DevSolar : The answer was given as a "answer right now" response. It's not exhaustive, I know, but it's what I came up with ad hoc.
  • This article should answer your question:

    http://www.codeproject.com/KB/cs/abstractsvsinterfaces.aspx

  • As far as job interviews are concerned, I've always heard that the key point is that an interface is a contract; an interface, while not implementing it itself, guarantees functionality.

  • An interface only describes the actual signature of its methods etc. Any class implementing that interface must then provide an explicit implementation.

    An abstract class can contain a partial implementation of its methods etc.

  • Interfaces provide definitions of methods that must be implemented by a class. The purpose of interfaces is to allow you to generalise specific functionality regardless of implementation. You may have an IDatabase interface that defines Open() and Close(). The class that implements that interface may be connecting to a MySQL database or an MS Access database. Irrespective of how it accomplishes this task, the goal is still the same: open database, close database.

    Abstract classes are base classes that contain some abstract methods. They cannot be instantiated; they are meant to be derived from. The purpose of an abstract class is to allow you to define some generic functionality and then sub-class to implement more specific functionality where appropriate.

    So, to summarize: you should use interfaces when the implementation of each class differs completely. Use abstract classes when you have some similar behaviour but need to implement parts differently.

    Hope that helps. James.
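
    A quick sketch of the distinction described above (all type names here are illustrative only):

    using System;

    // An interface is just the contract: no fields, no implementation.
    public interface IDatabase
    {
        void Open();
        void Close();
    }

    // An abstract class can carry shared state and partial implementation,
    // while leaving the truly database-specific part abstract.
    public abstract class DatabaseBase : IDatabase
    {
        protected string ConnectionString;     // shared member state

        public void Open()
        {
            Connect();                         // defer the specific part to the subclass
            Console.WriteLine("Opened: " + ConnectionString);
        }

        public void Close()
        {
            Console.WriteLine("Closed");
        }

        protected abstract void Connect();     // each database connects differently
    }

    public class MySqlDatabase : DatabaseBase
    {
        protected override void Connect()
        {
            // MySQL-specific connection logic would go here
        }
    }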

Eclipse commands

What is the difference between Ctrl + Shift + R and Ctrl + Shift + T?

Do we have a blog with all eclipse tips/shortcuts?

From stackoverflow
  • Ctrl-Shift-R: Find a resource, which can include XML files, Ant scripts etc.

    Ctrl-Shift-T: Find a type, regardless of the filename. (This can be handy if you're looking for a nested type, or basically if you don't want to see anything other than types...)

    For Eclipse shortcuts, there are lots of Eclipse keyboard cheat sheets. From the search, I like this PDF, this shorter list, and this list with more explanations.

    • T : open a type (Java artifact)
    • R : open any file (any resource)
  • See the top-voted answer to this question (which looks like a duplicate).

    • Ctrl+shift+R is for Open Resource, it searches for all types of files in your projects.
    • Ctrl+shift+T is for Open Type, it looks for Java classes and interfaces.

    A great feature of the Open Type dialog is that you can search for say DataInputStream by typing DIS.

    • Ctrl-Shift-T will find Java types including those imported in JAR files without source (e.g. Java standard API)
    • Ctrl-Shift-R will only find Java types present as source code, but will also find other kinds of source files such as Text and XML.
  • To see all key bindings look under Window -> Preferences -> General -> Keys.

  • Also, Ctrl+Shift+L will show you which keyboard shortcuts are active right then.

    evnafets : And pressing Ctrl+Shift+L again brings up the settings to modify them.
  • Blatant advertising coming up:

    If you like these two search options, you would love nWire. It is an Eclipse plugin for Java developers which provides, among other things, much stronger search. You get to search as you type for methods, fields, packages... practically everything you have in Java.

    As for some more tips, I have written the following article in EclipseZone. I really suggest adding EclipseZone to your RSS feeds reading list. Lots of tips and good stuff there.

How can I detect a scrollbar's presence (using JavaScript) in an HTML iframe?

I have already tried:

var vHeight = 0;
if (document.all) {
  if (document.documentElement) {
    vHeight = document.documentElement.clientHeight;
  } else {
    vHeight = document.body.clientHeight;
  }
} else {
  vHeight = window.innerHeight;
}

if (document.body.offsetHeight > vHeight) {
  // when there's a scrollbar
} else {
  // when there's not a scrollbar
}

And I also tried:

this.scrollLeft = 1;
if (this.scrollLeft > 0) {
  // when there's a scrollbar
  this.scrollLeft = 0;
} else {
  // when there's not a scrollbar
  return false;
}

With no success.

I have searched the JavaScript objects in the DOM Inspector, but didn't find anything.

Is it possible to detect a scrollbar's presence in an iframe with JavaScript?

From stackoverflow
  • In jQuery you can compare the document height, the scrollTop position and the viewport height, which might get you the answer you require.

    Something along the lines of:

    $(window).scroll(function(){
      if(isMyStuffScrolling()){
        //There is a scroll bar here!
      }
    }); 
    
    function isMyStuffScrolling() {
      var docHeight = $(document).height();
      var scroll    = $(window).height() + $(window).scrollTop();
      return (docHeight == scroll);
    }
    
    Code Burn : Thank you for your answer, but your code only tests when I try to move the scrollbar. I want to test it on page load.
  • I do not think this can be done if the iframe content comes from another domain due to JavaScript security limitations.

    EDIT: In that case, something along the lines of giving the iframe a name='someframe' and id='someframe2' and then comparing frames['someframe'].document.body.offsetWidth with document.getElementById('someframe2').offsetWidth should give you the answer.

    Code Burn : The iframe content comes from the same domain.
  • The iframe content comes from the same domain.

    No success until now..

  • var root= document.compatMode=='BackCompat'? document.body : document.documentElement;
    var isVerticalScrollbar= root.scrollHeight>root.clientHeight;
    var isHorizontalScrollbar= root.scrollWidth>root.clientWidth;
    

    This detects whether there is a need for a scrollbar. For the default of iframes this is the same as whether there is a scrollbar, but if scrollbars are forced on or off (using the ‘scrolling="yes"/"no"’ attribute in the parent document, or CSS ‘overflow: scroll/hidden’ in the iframe document) then this may differ.

  • I think your second attempt is on the right track. Except instead of this, you should try scrolling/checking document.body.

  • $(window).scroll(function(){
      if(isMyStuffScrolling()){
    //scrolling
      }else{
    //not scrolling
    }
    }); 
    
    function isMyStuffScrolling() {
      var docHeight = $(document).height();
      var scroll    = $(window).height() ;//+ $(window).scrollTop();
      if(docHeight > scroll) return true;
      else return false;
    }
    

    Improved/changed a bit from Jon Winstanley's code.

Adding string to bound field

I'm trying to add a prefix to a bound field:

<label><input type="radio" name="rbGroup" value='r<%#((Type)Container.DataItem).ID %>'/><%# ((Type)Container.DataItem).Action %></label>

but it's only putting r next to the button. I want rID.

From stackoverflow
  • value='<%#"r"+((Type)Container.DataItem).ID %>'

    CoffeeAddict : I know I tried this but I'll try again...weird.

What's the best way to respond to checkbox clicks on an MVC list view?

I've got a list view in my MVC app that shows a check box next to each entry:

<% For Each item In Model%>
    <%=Html.CheckBox("Selected", item.select_flag, New With {.onclick = "this.form.submit();"})%>
    <%=Html.Hidden("ID", item.id)%>
    <%=item.name%>
    <br/>
<% Next%>

As you can tell from the onclick, I'm submitting the form each time a user clicks a check box. In the controller, my post action looks like this:

<AcceptVerbs(HttpVerbs.Post)> _
Function List(ByVal Selected() As Boolean, ByVal ID() As String) As ActionResult

    For i = 0 To ID.Count - 1
        If Selected(i) Then
            [use ID(i) to update a database row]
        End If
    Next

    Return View(GetTheListOfNamesAndIds())

End Function

So I get an array of Selected values and ID's after each checkbox click. I assumed they would correspond, but I'm finding the two arrays to be out of sync for some reason. It's also a lot of overkill to process the whole list every time a checkbox is clicked, so I'd like to revisit this whole setup.

What's the best way to set this up so that clicking a checkbox will update a specific database row? Can it be done without reloading the list each time?

From stackoverflow
  • Consider wrapping each "row" in its own AjaxForm, or using jQuery to do the update via AJAX, then passing the data required for the action via the route values (or form values) in the AJAX get/post. The AjaxForm will want to update some DOM element with new content, but you could get around this by having it update an error message (with nothing on success) rather than the actual row, and doing any local changes via JavaScript. With jQuery AJAX you have a lot more options for how to handle it, but you may have to implement more code on the client side.

How to interpret this Rails error?

Hi, does anyone know what this means? I only get this error when my app is deployed on a server and using PostgreSQL. When I'm running locally and testing on SQLite, it is fine. I have a features_simulations join table; I think it is related to it somehow.

Processing AdminController#confirmed (for 211.30.107.155 at 2009-03-25 09:06:21) [GET]
  Session ID: 59d7fdbbb6ec77367c310df0c0928a2a


ActiveRecord::StatementInvalid (PGError: ERROR:  relation "features_simulations_id_seq" does not exist
: SELECT currval('features_simulations_id_seq')):
    /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/connection_adapters/abstract_adapter.rb:188:in `log'
    /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/connection_adapters/postgresql_adapter.rb:503:in `execute'
    /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/connection_adapters/postgresql_adapter.rb:1000:in `select_raw'
    /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/connection_adapters/postgresql_adapter.rb:987:in `select'
    /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache'
    /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all'
From stackoverflow
  • I'm not sure, but maybe you need an id column in the features_simulations table. An id isn't needed if you use has_and_belongs_to_many relations, but I think for has_many :through you need an id column in your join table.

    Try adding it in migration:

    add_column :features_simulations, :id, :integer, :primary_key
    
  • I think "features_simulations_id_seq" is a sequence that has to be created in the database. This sequence seems to be generating the id for the table.

  • In Postgres you can use a serial type for an auto-incrementing field, which will automatically create the necessary sequence. You can use an integer type and manually create the sequence if you want, setting a default as the next value from the sequence.

    Seems like the code is trying to find the current value of the sequence and failing because the sequence doesn't exist. I'm not sure if rails automatically creates the right type for Postgres primary keys.

  • ActiveRecord doesn't really use compound keys. The joining tables still have to have an ID in them for atomic deletes and updates. I think everyone else has said the same thing but in a more roundabout way.

    Omar Qureshi : You can add support for CPKs by installing Dr. Nic's composite primary keys; that will let you do Model.find(1,2) and all it needs is set_primary_keys :field_x, :field_y. See http://compositekeys.rubyforge.org/ for more information
  • It's a problem with the fixtures. Check your fixture names against your table names. You will get this error if there is a mismatch between the two.