Tuesday, May 3, 2011

filling a boost vector or matrix

Is there a single-expression way to assign a scalar to all elements of a boost matrix or vector? I'm trying to find a more compact way of representing:

boost::numeric::ublas::c_vector<float, N> v;
for (size_t i=0; i<N; i++) {
    v[i] = myScalar;
 }

The following do not work:

boost::numeric::ublas::c_vector<float, N> 
   v(myScalar, myScalar, ...and so on..., myScalar);

boost::numeric::ublas::c_vector<float, N> v;
v = myScalar;
From stackoverflow
  • Been a while since I used C++. Does the following work?

    for (size_t i = 0; i < N; v[i++] = myScalar) ;
    
    Mr Fooz : That'll work, though it is a full statement as opposed to an expression.
    Mikko Rantanen : True, but it is a more compact way, which is what you wanted to find based on the post.
    Mr Fooz : Yes, hence the +1.
    j_random_hacker : -1 sorry. You must use "v[i++]" -- "v[++i]" will skip initialisation of v[0] and overwrite the memory past the end of the vector.
    Mikko Rantanen : Ack. Sorry! I did acknowledge that I had to use the correct unary operator but for some reason I kept thinking ++i is the one that increments i after evaluating. Mostly since everyone prefer i++ and the "i += 1" behaviour seems more logical. Fixed now in any case.. And I guess I should thank you as well, I like 600 rep more than 601!
  • Because the vector models a standard random access container you should be able to use the standard STL algorithms. Something like:

    c_vector<float,N> vec;
    std::fill_n(vec.begin(),N,0.0f);
    

    or

    std::fill(vec.begin(),vec.end(),0.0f);
    

    It probably also is compatible with Boost.Assign but you'd have to check.

    Mr Fooz : The STL algorithms do seem to work. Thanks. Boost.Assign doesn't seem to work for me, but I think it's because I'm using a c_vector (const-sized vector) instead of a vector (dynamically sized vector), so push_back doesn't work.
  • I have started using boost::assign for cases that I want to statically assign specific values (examples lifted from link above).

    #include <boost/assign/std/vector.hpp>
    using namespace boost::assign; // bring 'operator+()' into scope
    
    {
      vector<int> values;
      values += 1,2,3,4,5,6,7,8,9;
    }
    

    You can also use boost::assign for maps.

    #include <boost/assign/list_inserter.hpp>
    #include <string>
    using boost::assign;
    
    std::map<std::string, int> months;
    insert( months )
            ( "january",   31 )( "february", 28 )
            ( "march",     31 )( "april",    30 )
            ( "may",       31 )( "june",     30 )
            ( "july",      31 )( "august",   31 )
            ( "september", 30 )( "october",  31 )
            ( "november",  30 )( "december", 31 );
    

    You can allow do direct assignment with list_of() and map_list_of()

    #include <boost/assign/list_of.hpp> // for 'list_of()'
    #include <list>
    #include <stack>
    #include <string>
    #include <map>
    using namespace std;
    using namespace boost::assign; // bring 'list_of()' into scope
    
    {
        const list<int> primes = list_of(2)(3)(5)(7)(11);
        const stack<string> names = list_of( "Mr. Foo" )( "Mr. Bar")
                                           ( "Mrs. FooBar" ).to_adapter();
    
        map<int,int> next = map_list_of(1,2)(2,3)(3,4)(4,5)(5,6);
    
        // or we can use 'list_of()' by specifying what type
        // the list consists of
        next = list_of< pair<int,int> >(6,7)(7,8)(8,9);
    
    }
    

    There are also functions for repeat(), repeat_fun(), and range() which allows you to add repeating values or ranges of values.

    Daniel Newby : The first example seems to be for std::vector (for which it works), not boost::numeric::ublas::vector (for which it does not work).
  • Have you tried this?

    ublas::c_vector<float, N> v = ublas::scalar_vector<float>(N, myScalar);

Excel pivot table question - How to get subtotals for a row area field?

I have the following situation in database:

Department table holds all departments in the company.

Employee table holds all employees, together with their department id and an AllowedAbsenceDays field, which tells us how many days the employee is allowed to be absent in the current year.

Absence table holds all absences that occur in the company. It has the employee id and date of absence, and also the reason for the absence (vacation, sick leave, personal days, and potentially more in the future...)

[screenshot of the three tables]

I am supposed to create a report which lists all employees and shows how many days they are allowed to be absent, together with how many days they have been absent (and the reason for the absence).

I have grouped the data by employee and by reason of absence:

[screenshot of the grouped data]

This is what I am able to get so far, by putting the Department, Employee and Allowed Absence Days in the pivot row area, Reason in the column area and Sum of Days absent in the data area:

[screenshot of the pivot table so far]

The problem is that I am not able to get the subtotals of allowed absence days per department (which is a firm requirement). The final report should look somewhat like this (I have photoshopped this).

[screenshot of the mocked-up final report]

Is there any way to get these subtotals? Maybe I should prepare the data for the pivot table in a different way? Please note that the Grand Total column should include only the actual days of absence (not the Allowed Absence Days).

Example workbook is available for download here

Thanks to everybody who is still reading :)

P.S. The real case is different (in the problem domain). This is a somewhat contrived example, but the basic problem is the same.

From stackoverflow
  • It's a classic data normalisation issue. The "Allowed absence days" field relates to the employee and not to any particular absence. The repetition of this info in every row (record) is what is causing the problem.

    To achieve this within an Excel sheet and pivot table, you could remove the "Allowed absence days" column and instead use "Allowed absence days" as one of the values in the "Reason" column. It might be beneficial to either show allowed days as a negative amount or to show days for the other absence reasons as negatives. Otherwise the overall grand total will make no real sense.

    Sheet and pivot table should look roughly like this:

    Excel spreadsheet showing restructured data: column A is department, B is employee, C is days absent, D is reason. Allowed absence days is now a reason and associated days absent is negative

    Excel pivot table showing sums of days absent in columns C through E. Allowed absence days in column C

    Repeating the department for every employee doesn't seem ideal, but I suppose an employee could change department during the year, so you might or might not have to account for that somehow. You also need it to make the pivot table work anyway.

    If you absolutely 100% cannot have the grand total affected in this way then I don't believe that you can achieve this with a pivot table. You might want to look at using ADO to query the sheet and use some VBA macros to format and output the results. This page should give you some idea of what's involved.

  • Just leave "allowed absence days" blank in all rows, and then at the bottom/top put in rows with the desired numbers for "allowed absence days" and blanks for the other stuff.

    Then just put your "allowed absence days" in data instead of columns in the pivot and drag it to the left.

    BR R

WSGIServer errors when trying to run Django app

Firstly, here's my script:

#!/usr/bin/python
import sys, os

sys.path.append('/home/username/python')
sys.path.append("/home/username/python/flup")
sys.path.append("/home/username/python/django")
# more path stuff

os.environ['DJANGO_SETTINGS_MODULE'] = "project.settings"

from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")

As was described here.

And here's the error I get when trying to run it from shell:

WSGIServer: missing FastCGI param REQUEST_METHOD required by WSGI!
WSGIServer: missing FastCGI param SERVER_NAME required by WSGI!
WSGIServer: missing FastCGI param SERVER_PORT required by WSGI!
WSGIServer: missing FastCGI param SERVER_PROTOCOL required by WSGI!
Status: 404 NOT FOUND
Content-Type: text/html


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html lang="en">
<!-- more html which looks to be the correct output -->

My question is, why aren't those params passed automatically by FastCGI? What am I doing wrong? Running the script from my web server just gives me an internal server error.


Instead of the last two lines of my script, I can use

from flup.server.fcgi import WSGIServer
from django.core.handlers.wsgi import WSGIHandler
WSGIServer(WSGIHandler()).run()

But I still get the exact same error...

From stackoverflow
  • The script expects those params to be passed as environment variables. Since they are not present in your shell environment, and the script is not running in the apache fastcgi environment (which provides them), it complains.

    Do you have access to apache error logs? What do they say?

    Does your host have mod_wsgi support? If so, you could use Django's wsgi handler:

    import sys
    import os
    
    base = os.path.dirname(os.path.abspath(__file__)) + '/..'
    sys.path.append(base)
    
    os.environ['DJANGO_SETTINGS_MODULE'] = 'yourproject.settings'
    
    import django.core.handlers.wsgi
    
    application = django.core.handlers.wsgi.WSGIHandler()
    

    Further instructions can be found on the modwsgi wiki, and the Django docs.

    Mark : This gives me no output at all when running from shell, and the usual internal server error when running from my browser. I'm not sure if my server supports WSGI, nor do I know how to check.
    Mark : Also, I'm not sure what you mean exactly but "not running in a fastcgi environment". My host claims they support FastCGI, and I can save stuff as .fcgi and have it run... what exactly is going on if it's not FastCGI, and how do I phrase this so I can ask my hosting providers to enable it?
    vezult : What I mean is that Apache passes information to your fastcgi process via environment variables. Your shell does not contain that information unless you specifically add it. Since you did not set those variables in your shell environment when you ran your script from the command line, your fastcgi script does not have the information that it requires, hence the error. The error you describe at the command line is not necessarily related to whatever error is preventing your script from running under apache.
    Mark : Ohh.. right.. command line = no apache. Well, is there any way I can get some more feedback than "internal server error", or check what apache modules are installed?
    vezult : Sorry, without your config info or error logs I can't help.
  • Solved it. This .htaccess file did the trick, for whatever reason. I swear I tried all this before...

    AddHandler fcgid-script .fcgi
    Options +FollowSymLinks
    RewriteEngine On
    RewriteBase /
    RewriteRule ^(media/.*)$ - [L]
    RewriteRule ^(adminmedia/.*)$ - [L]
    RewriteCond %{REQUEST_URI} !(cgi-bin/myproject.fcgi)
    RewriteRule ^(.*)$ cgi-bin/myproject.fcgi/$1 [L]
    

What is switch action & forwardAction in struts

What are SwitchAction and ForwardAction in Struts?

From stackoverflow
  • Found a blog post while searching over the net. Here is the link.

    Ashvin Ranpariya : thanx for giving me ur precious time and answer .
    Adeel Ansari : No worries. You are welcome. :)

Icon of project in Visual Studio

I want certain projects in my solution, when opened in Visual Studio, to have different icons, just like Visual Studio displays different icons for a class library project, a web site project, etc.

I am not talking about changing the icon of the WinForms or other such app being produced.

I understand I should arrange my code into proper namespaces / folder structure... and can even arrange the projects into solution folders... but the above would communicate better to my teammates for the time being... (unfortunately we have inherited a reasonably large code base and things need to be cleaned up as development continues)

From stackoverflow
  • If you look in your SLN file, the project has a guid assigned to it.

    all the c# projects have

    Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}")
    

    and all the build folders have ..

    Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}")
    

    These define the project type, which you can hunt down in your registry ....

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Projects
    

    With this finding, I would say what you are looking for is VERY possible, but you'll have to create custom project types for Visual Studio. This may not be as daunting as it sounds if you start with one of the default packages.

    Here is a pointer to a sample, where in fact they mention setting the custom icon in step 3.

    Building a Custom Project Wizard in Visual Studio .NET

Dark Blue styling

This is not a programming question, so I apologize if it doesn't really fit in this website.

I want to create a 3 column website where the dominating background color is dark blue, and the secondary side column is more noticeable than the tertiary column, and where the primary content area is white.

I'm thinking about this and I can't think of any ways or color choices to make it look good.

Any sources of inspiration or just color suggestions or overall style suggestions would be appreciated!

From stackoverflow
  • Before this gets smacked down:

    Use http://colorschemedesigner.com/

    It will give you an example of what the website will look like in the color scheme and it will also generate the template for you which is nice.

    altCognito : http://colorschemedesigner.com/#40520w0w0w0w0 -- I'll even set you up with dark blue. :)
  • This site is useful for helping with colour selections.

  • Don't miss out on http://www.templatemonster.com/.

  • I have found http://www.colourlovers.com/ to be about the best with coming up with color schemes.

Is this a good JavaScript object model? (phone book project)

I need to create a phoneBook. Thanks to a form, I can retrieve a person's data. I need to use objects for that purpose. I created a phoneBook() object with a method that allows adding a person to the phoneBook.
I decided (it wasn't asked for, though) to divide the "person" concept in two, which results in a "Person" object and an "AddressPerson" object (the same person can have two houses: "My tailor is rich!" :-))

Is this a good way to declare the Person object?
Can I create a Person object without an address and add one later on, like I did with the "Person2" object?

If someone could help me, I'd be very obliged. Thank you very much in advance!

function phoneBook(){
  this.Liste = new Array(); 
}

phoneBook.prototype.Add = function(aLastName,aFirstName,aAddress){
   this.Liste.push(new Person(aLastName,aFirstName,aAddress));
}

function Person(aLastName,aFirstName,aAdd){
  this.LastName   = aLastName;
  this.FirstName  = aFirstName;
  this.Address = 
    new AddressPerson(aAdd.Street,aAdd.CP,aAdd.Town,aAdd.NumTel,aAdd.Email);
}

function Person2(aLastName,aFirstName){
  this.LastName   = aLastName;
  this.FirstName  = aFirstName;
  this.Address = 'unknown';
}

function AddressPerson(aStreet,aCP,aTown,aNumTel,aEmail){
  this.Street = aStreet;
  this.CP    = aCP;
  this.Town = aTown;
  this.NumTel = aNumTel;
  this.Email= aEmail;
}
From stackoverflow
  • Some suggestions:

    • Change your class (PhoneBook) to be Capitalized, and your methods/properties (lastName, add()) to be lower-case
    • The 'a' prefix on method parameters is not needed, since in Javascript this is never implicitly used.
    • Methods should take objects as parameters. For example, add() should be add(person) where person is a pre-constructed Person object.
    • Why do you need Person2? It seems redundant.
    • In constructor for Person, you copy every field of the Address. Generally just doing this.address = address would be OK. But if you want to ensure every Person has its own instance of Address, provide a clone() method on Address.
  • Take a look at JSLint. It works sort of like the W3C's Markup Validator.

    JSLint is created by Douglas Crockford--a Yahoo! JavaScript evangelist and the dude who invented JSON.

    What JSLint is all about:

    "JSLint is a JavaScript program that looks for problems in JavaScript programs.

    When C was a young programming language, there were several common programming errors that were not caught by the primitive compilers, so an accessory program called lint was developed which would scan a source file, looking for problems.

    As the language matured, the definition of the language was strengthened to eliminate some insecurities, and compilers got better at issuing warnings. lint is no longer needed.

    JavaScript is a young-for-its-age language. It was originally intended to do small tasks in webpages, tasks for which Java was too heavy and clumsy. But JavaScript is a very capable language, and it is now being used in larger projects. Many of the features that were intended to make the language easy to use are troublesome for larger projects. A lint for JavaScript is needed: JSLint, a JavaScript syntax checker and validator.

    JSLint takes a JavaScript source and scans it. If it finds a problem, it returns a message describing the problem and an approximate location within the source. The problem is not necessarily a syntax error, although it often is. JSLint looks at some style conventions as well as structural problems. It does not prove that your program is correct. It just provides another set of eyes to help spot problems.

    JSLint defines a professional subset of JavaScript, a stricter language than that defined by Edition 3 of the ECMAScript Language Specification. The subset is related to recommendations found in Code Conventions for the JavaScript Programming Language.

    JavaScript is a sloppy language, but inside it there is an elegant, better language. JSLint helps you to program in that better language and to avoid most of the slop."

  • There are some things you can do for more succinct code, as well as following today's accepted best practice.

    You could change this line

      this.Liste = new Array();
    

    to

      this.Liste = []; // shorthand for creating an array. You can do the same with an object with {}
    

    I generally use JSON objects for this sort of thing. Douglas Crockford's JavaScript is a very useful resource.

  • Thanks a lot.
    I changed my code like this:

    function PhoneBook(){
        this.liste = [];
    }
    
    PhoneBook.prototype.add = function(person){
       this.liste.push(person);
    }
    
    function Person(lastName,firstName,address){
        this.lastName   = lastName;
        this.firstName  = firstName;
        this.address    = address;
    }
    
    function AddressPerson(street,cp,town,numTel,email){
        this.street = street;
        this.cp     = cp;
        this.town   = town;
        this.numTel = numTel;
        this.email  = email;
    }
    
    // tests for the others :-))
    
    var phone = new PhoneBook();
    alert(phone.liste.length);
    var person = new Person(
        "aaaaaa",
        "bbbbbbb",
        new AddressPerson("zzzzz","87","rrrrr","22222","eeeee@uk.co")
    );
    alert(person.address.street);
    phone.add(person);
    alert(phone.liste.length);
    alert(phone.liste[0].address.numTel);
    

    But I don't know how to do this, which Levik said in his answer:

    But if you want to ensure every Person has its own instance of Address, provide a clone() method on Address

    nickf : hi gmatu - you can edit your answer by clicking the "edit" link just above the comments.
  • Here's my suggestion:

    function PhoneBook( person /*, ... */ ){
        this._liste = [];
    }
    
    PhoneBook.prototype.add = function(){
        var l = arguments.length;
        while(l--){
         this._liste.unshift( arguments[l] );
        }
    }
    
    function Person( p ){
        this.lastName   = p.lastName;
        this.firstName = p.firstName;
        this.address  = p.address !== 'unknown' ? new Address( p.address ) : 'unknown';
    }
    
    function Address( a ){
        this.street = a.street;
        this.CP     = a.CP;
        this.town  = a.town;
        this.numTel = a.numTel;
        this.email = a.email;
    }
    
    harto : I don't think that changing the Person and Address constructors to accept an object is an improvement. That just makes construction of those objects unnecessarily complex IMO. Also, I think your add function will miss the 0th element of the arguments array?
  • Data model suggestions:

    • Rename AddressPerson to Address. You could hypothetically pass Address objects around that don't correspond to a Person.

    • Consider moving the email field into Person - as you mentioned, you might have multiple people at one address (they might not necessarily share email addresses).

    • Pass a fully initialised Address into the Person constructor.

    • Remove Person2 function, and make address an optional attribute of Person. E.g. assignment becomes this.address = (address === undefined) ? "unknown" : address;

    From a style/convention perspective, levik's response is good - I endorse those suggestions.

  • Commenting on my own post comment, before I registered:

    @harto - I don't think that changing the Person and Address constructors to accept an object is an improvement. That just makes construction of those objects unnecessarily complex IMO. Also, I think your add function will miss the 0th element of the arguments array?

    I do think that it is good to change the constructors: it allows automatic cloning, for one, and, if you're using it, allows creation of objects directly from JSON. Anyway, there are plenty of applications. If needed, you could always have an arguments-length verification and branch to the proper behavior.

    As for the add function, it won't miss the 0th element because it is post-decremented, which means that the expression returns the value before the decrement. It is the fastest loop you can devise in JavaScript.

    harto : Quite right on the loop index, my mistake :)

CSS Background Image Load Behavior

Say there is a CSS style declaration for an element which has been set to display:none (not displayed on screen), and it also has a background image set.

Let's say this element is displayed on an event such as a mouse click. Would the browser load these images even before the element is displayed? Is this load behaviour consistent across browsers?

From stackoverflow
  • It is not loaded automatically but you can use a Javascript trick to preload an image.

    pic = new Image(); 
    pic.src="http://url/imagetoload.png";
    

    As a better solution, you may create a div with a negative positioning value (such as left: -1000px) and assign imagetoload.png to its background to load the image.

    yfel : thanks,vote for your first answer in this site ^^
  • No, browsers I've tested before do not.

    if you do want to load the image, try using background-position: -1000px -1000px which does work. It won't show the image, but it will be preloaded. However, the element will be in the normal flow, i.e. not hidden.

    yfel : I just want to make sure that images which are hidden from the document are not loaded by the browser. So is this true for major browsers?
    alex : Yes, if a background image is on a hidden element, it won't be shown. However, if you delay hiding an element with JS for example, there's a chance that image will be loaded.

Deferred loading of TreeView in .NET

This is a System.Windows.Forms.TreeView, not a WPF or Web TreeView.

I must be missing something, because I can't believe this is that difficult.

I've got a TreeView on my form. Loading all the data to populate the TreeView is very slow, so I want to load just the top-level nodes and then fill in the children as the user expands nodes. The problem is, if a node doesn't have any children it doesn't display the + sign next to the node, which means it can't be expanded, which means I can't capture an Expanding event to load the children.

Years ago when I was using PowerBuilder you would set a HasChildren (or similar) property to true to essentially 'lie' to the control and force it to display the + and you could then capture the Expanding event. I've not been able to figure out how to force a + to appear on a node when there are no child nodes.

I've tried an approach where I add a dummy node to each node and then on expanding check if the dummy node is present and remove it then load the children, but for various reasons that aren't worth getting into here that solution is not viable in my situation.

I've Googled for various combinations of c#, treeview, delayed, deferred, load, force, expansion, and a few other terms that escape me now with no luck.

P.S. I found the TreeViewAdv project on SourceForge, but I'd rather not introduce a new component into our environment if I can avoid it.

From stackoverflow
  • A possible solution is to stay one step ahead of the treeview:

    private void Form1_Load(object sender, EventArgs e)
    {
     // initialise the tree here
     var nodeSomething = treeView1.Nodes.Add("Something");
     nodeSomething.Nodes.Add("Something below something");
    
     treeView1.AfterExpand += AfterExpand;
    }
    
    private void AfterExpand(object sender, TreeViewEventArgs e)
    {
     foreach (TreeNode node in e.Node.Nodes)
     {
      // skip if we have already loaded data for this node
      if (node.Tag != null) 
       continue;
      node.Tag = new object();
      node.Nodes.AddRange(GetDataForThisNode(node));
     }
    }
    
  • I've also wondered how to get the + to show up next to childless nodes, but I never found a good way. My solution was to handle the MouseDoubleClick event like so:

    Private Sub tvwMain_MouseDoubleClick(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles tvwMain.MouseDoubleClick
        Dim oNode As TreeNode
    
        oNode = tvwMain.GetNodeAt(e.X, e.Y)
    
        If oNode IsNot Nothing Then
            If oNode.Nodes.Count = 0 Then
                'add children here
            End If
        End If
    End Sub
    
  • I agree with Chris, I've had to do this very thing. Load the top nodes and then capture the click event, make sure the click was on a selected node, then populate the node, and finally expand it.

    If it's required to have the plus, then load the top nodes and drop a dummy node in each. Make sure you capture the click or expand event, clear the nodes and then repopulate them (see the sketch below).

    X-Cubed : I second the dummy node idea. I'm using that myself and setting the Tag property to a specific value to differentiate the "Loading..." dummy nodes from the others. In BeforeExpand, I check for the existence of the dummy node and replace the contents. It works well for me.
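
    A minimal sketch of that dummy-node / BeforeExpand approach (these are form members, with treeView1.BeforeExpand wired to the handler; LoadChildren and the "dummy" tag value are placeholders for your own slow data access, not framework names):

    private void AddTopLevelNode(TreeView tree, string text)
    {
        TreeNode node = tree.Nodes.Add(text);
        TreeNode dummy = node.Nodes.Add("Loading...");
        dummy.Tag = "dummy";            // forces the + glyph even though no real children exist yet
    }

    private void treeView1_BeforeExpand(object sender, TreeViewCancelEventArgs e)
    {
        if (e.Node.Nodes.Count == 1 && "dummy".Equals(e.Node.Nodes[0].Tag))
        {
            e.Node.Nodes.Clear();                        // drop the placeholder
            e.Node.Nodes.AddRange(LoadChildren(e.Node)); // real children loaded on demand
        }
    }

    private TreeNode[] LoadChildren(TreeNode parent)
    {
        // Hypothetical: query the slow data source for the children of 'parent'.
        return new TreeNode[0];
    }
    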
  • Have you tried enclosing your node additions in BeginUpdate() and EndUpdate() calls? This disables any painting and as such allows you to add a large number of nodes to the TreeView without significant delays.

    MSDN: TreeView.BeginUpdate Method

    Lucas : I think the OP meant that loading the data itself (from a DB, for example) was slow, not adding the tree nodes, so they want to load the data on demand (when the user expands each tree node).
    Craig W. : Lucas' comment is correct. The TreeView itself populates fast enough; it's getting the data that is slow.
  • Load the first 2 levels at startup and load 2 levels down when a node is expanded.

  • You may have better luck using TreeViewAdv (on sourceforge).

Migrating password encryption schemas

I am possibly taking over an app that literally just encrypts user passwords by doing md5( password )

They have ~2000 users to date, so I'm wondering how I can migrate those passwords (or can I?) to a stronger encryption scheme (e.g. involving a salt, a user-specific hash, and their password, all encrypted with SHA1, bcrypt, whatever)

Thanks.

From stackoverflow
  • MD5 is a cryptographic hash function, not necessarily an encryption method. A hash is designed to only be performed in one direction, and cannot be reversed other than by dictionary attack. As an example, you can try out this hash database lookup if you're feeling frisky.

    You will probably want to save these old passwords in a separate column; then, when a user logs in to the "new" system, compare the MD5'ed version of that password with the old one, and if the digest matches, perform SHA1 with a salt on that password and store that in a separate column (see the sketch below).

    Alternatively, and probably a better approach, is to force the users to change passwords... and when they enter their new one, use the new hash algorithm on it instead.

    Kyle : thanks! I was thinking something similar - in the interest of avoiding a force-the-user-to-change-their-password system, I was thinking of moving the existing passwords to an additional column, as you mention, and just re-encrypting on the fly. Luckily I'm dealing with 2k users vs 20k or 200k.
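
    A hedged sketch of that login-time upgrade, written in C# since the question doesn't name a language; the User type, its fields, and the 16-byte salt are all invented for illustration:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Hypothetical shape of a user record; the real columns live in your database.
    class User
    {
        public string LegacyMd5Hash;  // old unsalted md5(password), hex-encoded
        public string Salt;
        public string SaltedSha1;     // new salted hash, null until the row is migrated
    }

    static class PasswordMigration
    {
        static string HexDigest(HashAlgorithm algo, string input)
        {
            byte[] hash = algo.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }

        // Returns true if the password is correct; upgrades legacy rows as a side effect.
        // The caller is expected to persist the updated user afterwards.
        public static bool VerifyAndMigrate(User user, string entered)
        {
            if (string.IsNullOrEmpty(user.SaltedSha1))
            {
                using (var md5 = MD5.Create())
                    if (HexDigest(md5, entered) != user.LegacyMd5Hash)
                        return false;

                // The password matched the old digest, so re-hash it with a salt.
                var saltBytes = new byte[16];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(saltBytes);
                user.Salt = Convert.ToBase64String(saltBytes);
                using (var sha1 = SHA1.Create())
                    user.SaltedSha1 = HexDigest(sha1, user.Salt + entered);
                user.LegacyMd5Hash = null;   // the old digest is no longer needed
                return true;
            }

            using (var sha1 = SHA1.Create())
                return HexDigest(sha1, user.Salt + entered) == user.SaltedSha1;
        }
    }
    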

General Question on Large J2EE web application, clearly separate application by modules. Possible use of business delegate pattern

Before I ask my "general" question, I wanted to present some quotes to my high-level general understanding of the business delegate pattern:

"You want to hide clients from the complexity of remote communication with business service components."

"You want to access the business-tier components from your presentation-tier components and clients, such as devices, web services, and rich clients."

Let's say you have a large J2EE web application. When I say large, I mean that the application has been developed for several years and by many different people and departments. The system wasn't designed well for growth. Right now, let's say the application is configured as one web application.

It would have been great if the application were more modular. But it isn't. One piece of code or library can bring down the entire system. Testing and other means are used to prevent this, but essentially this is just one monolithic web application.

Here is my question: how do you normally avoid this type of design with J2EE apps, where the application grows and grows and one separate part can bring down everything?

I am not familiar with EJBs and don't plan on using them for anything too serious. But the concept of the Business Delegate and Service Locator patterns seems like a good fit.

Let's say you have a shopping cart screen (same web app) and a set of screens for managing a user account (same web app). In your web tier (some kind of MVC action method), it seems you could have a Locator that will get the business-specific interface and invoke those interfaces per module, and if something goes wrong with one module then it doesn't kill other code. Say the shopping cart module fails (for whatever reason); the user account screen still works.


From stackoverflow
  • In a healthy software development environment, the application "grows and grows" because business people ask for changes and new features, and usually there is nothing wrong with that.

    Let's forget about the scalability problem and focus on the design problem, which leads to very painful maintenance.

    What you need to do is separate your application into modules that make sense to the business guys (and the users). Each module of your system should have a clear business meaning that reflects the application domain.

  • All of those ideas are fine, and would be perfectly reasonable if you were starting from scratch. Unfortunately, you're not starting from scratch but dealing with a prototypical Big Ball of Mud. Believe me when I say that I feel your pain.

    More important than the specific pattern is to impose some sort of order, any sort of order, on this mess. More important than your specific design choice: can you reasonably impose your design choice on the entire system so that there's a single unifying paradigm that carries through the whole application?

    If your design satisfies the design requirements and conceptually simplifies the application then it's valid.

  • The best way to improve a system like this is to slowly make it better. If your system is anything like the systems I've worked on, changing the code often leads to bugs. Automated tests are a way to reduce the chance of introducing new bugs, but often the code wasn't written with testing in mind, and changing the code to make it easier to write tests can lead to bugs.

    The way around this problem is to introduce some automated integration tests, and use those tests as a safety net as you carefully refactor the code and introduce tests at a lower level. Making code more testable often results in introducing interfaces and abstractions that make the code easier to work with. It also often requires separating business logic from presentation logic (since testing both together is painful) and breaking out our code into modules (since classes with many dependencies can be hard to test).

    When you write new code for your system, try to write unit tests while you write the code. This is much easier (and therefore less frustrating) than writing tests for legacy code, and gives you a chance to see where you might go when you refactor your legacy code.

    I know of two excellent books on this subject: Working Effectively with Legacy Code by Michael Feathers and Refactoring to Patterns by Joshua Kerievsky.

    Finally, consider using a dependency injection framework, like Spring or Guice. Dependency injection makes it easier to make your code testable with unit tests. It also makes it easier to follow good design practices like the Dependency Inversion Principle and the Interface Segregation Principle.

  • The best way to avoid such a mess when you were given a chance to restart is:

    • Use the Spring Framework throughout.
    • Pay attention to high cohesion and low coupling: make sure there are no circular dependencies between packages (com.myapp.packageA depends on com.myapp.packageB which depends on com.myapp.packageA). There are some great free tools like Architecture Rules that can verify that for you automatically.

    With these two rules you'll already be forced to follow sound design principles. Otherwise, design needs constant attention. The point of any design effort is to reduce uncertainty, so you basically have to know why you're designing in the first place.

Linux-like Ctrl-C (KeyboardInterrupt) for the Windows cmd line?

I've been forced into using a command line in Windows and wondered if there are Linux-like keyboard shortcuts. I googled and didn't find what I was looking for.

Things like ^C, ^Z and such? Thanks all!

From stackoverflow
  • Ctrl-C does a similar thing in Windows as it does in Linux.

  • You can trap ^C on Windows with SIGINT, just like Linux. The Windows shell, such as it is, doesn't support Unix style job control (at least not in a way analogous to Unix shells), and ^Z is actually the ^D analog for Windows.

  • Try Ctrl+Break: some programs respond to it instead of Ctrl+C.

    Note also that nothing can cancel synchronous network I/O (such as net view \\invalid) on Windows before Vista.

Collapse projects in Solution Explorer

How can I collapse / expand all projects in Solution Explorer using the keyboard?

From stackoverflow
  • Power Commands for VS2008 has a command to do this, plus lots of other neat tools

    http://code.msdn.microsoft.com/PowerCommands

    You could also write a macro to do it, and assign it to a key combination

    http://weblogs.asp.net/israelio/archive/2007/09/06/visual-studio-collapse-selected-node.aspx

  • Press CTRL+ALT+L to focus on Solution Explorer. If it's not on the screen, or it's collapsed, this will focus on it.

    Then press your left arrow key as many times as it takes to get to the solution, and with another left, it'll collapse the whole tree.

    I just tested VS2008, and you can also press HOME to get to the top of the tree.

    If what you want is the solution open and all the projects collapsed, then....

    once you're at the top of the tree view, press Down, then Left, for each open project and/or open solution folder (if you have those).

    The power tools that @Jason mentioned might have a different solution, but this is how you can navigate the Solution Tree without any add-ins.

UDP broadcast and unicast through the same socket?

I have a Linux application that opens a UDP socket and binds it to a port. I haven't had any problem sending unicast packets through the socket. I had occasion to send a broadcast packet, so I enabled SO_BROADCAST, which allowed the broadcast packets to pass, but then I noticed that the unicast packets were being broadcast as well. Is this expected behaviour for a UDP socket, or is it more likely that I've misconfigured something?

From stackoverflow
  • I have not done much hands on programming here, but you probably need to provide more information about the library, OS version, code, etc. Maybe a code sample?

    If I remember the books I read, if you set the flag on the socket, that is going to affect all datagrams sent from the socket, because the socket is basically a data structure of network flags + a file descriptor.

  • From what I understand SO_BROADCAST is a socket option. So if you enable it on your socket this socket will broadcast. I guess you will need to open different sockets if you want to do unicast and broadcast from the same code.

    Dave Causey : Thanks. That's what I was digging for. I figured I was expecting it to do something it wasn't designed to do. A second socket should do the trick, though I'm guessing I'll have to bind to a second port (bummer).
  • I ran into the same issue on Linux, having a single socket receive unicast and broadcast at the same time. I solved the problem as follows (pseudo-code):

    1. sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)
      • Open the socket
    2. setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &1)
      • Allows incoming and outgoing broadcast from this socket
    3. bind(sock, bindaddr, sizeof(struct sockaddr)) with

    bindaddr.sin_family = AF_INET

    bindaddr.sin_port = <YourPort>

    bindaddr.sin_addr.s_addr = INADDR_ANY

    • Get all incoming messages on any card for <YourPort>

    The caveat is that there is no filtering (see the caveat in 3.). So you will get all messages. The sent messages are either unicast or broadcast depending on the given address in the sendto().

Is there any way to make an existing type implement an interface?

Let's say I write a really kick-ass interface. So kick-ass, in fact, that I'd like some of the builtin types I use to implement them, so that whatever code I write that uses this interface can also use the builtin types.

public interface IKickAss
{
    int Yeahhhhhhh { get; }
}

public static class Woot
{
    public static int Bar(IKickAss a, IKickAss b)
    {
        return a.Yeahhhhhhh - b.Yeahhhhhhh;
    }
}

// What I'd like to do, sort of.
public partial struct Int32 : IKickAss
{
    public int Yeahhhhhhh
    {
        get
        {
            return this;
        }
    }
}

I've wanted this many times for many reasons. The most recent is that I've implemented radix sort for "uint", but have also created a simple "IRadixSortable" interface that requires the property "uint RadixKey { get }". This means that I've basically duplicated the sorting code: once for uint arrays, the other for IRadixSortable arrays. I'd rather just write one by making the uint type implement IRadixSortable. Is there some way to do this... maybe using reflection?

Haskell can do this (i.e. typeclasses can be instantiated on any data type at any time,) and I think that's one very important reason why it's so powerful. C# could really use this feature. Maybe call it "extension interfaces" :)

From stackoverflow
  • Yes and no.

    You're probably looking for Duck Typing, see the following article.

    kinghajj : Thanks for the link. I'll look into it later to how I can use it to solve my problem.
  • What about generics? The basic numeric types all implement IComparable, so you could write an IComparer implementation to pass to the array's Sort method.

    Or perhaps an extension method.

    kinghajj : That's not useful because Radix Sort is a non-comparison sort, so using IComparable couldn't help much.
    SnOrfus : and the .net framework uses your comparer within its quicksort.
  • Not with interfaces, but you could do something similar using Extension Methods (see the sketch below). You just wouldn't get the "contract".
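
    A small illustration of that idea for the radix-key case; the class and method names here are just examples:

    public static class RadixKeyExtensions
    {
        // Call sites read almost like a property: someValue.RadixKey()
        public static uint RadixKey(this uint value)
        {
            return value;
        }
    }

    Calling code can then write myUInt.RadixKey(), but, as the answer says, nothing enforces it the way an interface would.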

  • If you control creation of instances you can inherit from the class and implement the interface in the new inheritor class.

  • Others have answered the main question, but regarding your specific radix sorting scenario, you could also look at the kind of pattern used in the .NET Framework for comparisons, hashing, etc.: allow the user of your radix sort method to pass in an object that controls the sorting order. E.g. create an IRadixKeyProvider interface (analogous to IHashCodeProvider or IComparer) with a RadixKey property.

    It's not ideal because users would have to pass in an IRadixKeyProvider each time they radix-sort a collection, rather than defining the radix key once and for all on the collection type. (Though you could probably mitigate this by creating an overload of your sort method for predefined types which creates the relevant IRadixKeyProvider internally and then forwards to the more general method.) And of course it doesn't address the more general scenario (yeah, I want typeclasses too!). But at least it saves you from duplicating the radix sorting code; a rough sketch follows.
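
    A rough sketch of that provider pattern; IRadixKeyProvider, UIntKeyProvider and RadixSorter below are illustrative names, not an existing framework API:

    using System.Collections.Generic;

    public interface IRadixKeyProvider<T>
    {
        uint GetRadixKey(T item);
    }

    public sealed class UIntKeyProvider : IRadixKeyProvider<uint>
    {
        public uint GetRadixKey(uint item) { return item; }
    }

    public static class RadixSorter
    {
        // One LSD radix sort implementation; keys come from the provider,
        // so uint[] and user-defined types share the same code path.
        public static void Sort<T>(T[] items, IRadixKeyProvider<T> keyProvider)
        {
            var buckets = new List<T>[256];
            for (int i = 0; i < buckets.Length; i++)
                buckets[i] = new List<T>();

            for (int shift = 0; shift < 32; shift += 8)
            {
                foreach (T item in items)
                    buckets[(keyProvider.GetRadixKey(item) >> shift) & 0xFF].Add(item);

                int index = 0;
                foreach (List<T> bucket in buckets)
                {
                    foreach (T item in bucket)
                        items[index++] = item;
                    bucket.Clear();
                }
            }
        }
    }

    A uint[] would then be sorted with RadixSorter.Sort(data, new UIntKeyProvider()), and an IRadixSortable-style type only needs its own small provider.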

  • I can't see a solution for intrinsic types, but for other types (ie. ones created by your or someone else) you could just subclass it and implement the interface of your choice.

    public interface ISortable
    {
        // ... whatever you need to make a class sortable
    }
    
    public class ExistingType
    {
        // whatever
    }
    
    public class NewType : ExistingType, ISortable
    {
        // ...
    }
    

    unless, of course, you have access to the existing type... then just have it implement your interface.

Why does this Flex "Hello World" app not pick up the result of my remote HTTPService call?

When I go to http://localhost:3000/hello/sayhello, Rails outputs:

hello world!

as HTML.

But when I run this Flex remote "Hello World" app, I see a button and a textbox but it does not pick up the output of the HTTPService call to my Rails url. Any ideas why?

<?xml version="1.0" encoding="utf-8"?>
<mx:Application
    xmlns:mx="http://www.adobe.com/2006/mxml"
    layout="vertical"
    backgroundGradientColors="[#ffffff, #c0c0c0]"
    width="100%"
    height="100%">
    <mx:HTTPService
        id="helloSvc"
        url="http://localhost:3000/hello/sayhello"
        method="POST" resultFormat="text"/>
    <mx:Button label="call hello service"
        click="helloSvc.send()"/>
    <mx:TextInput text="{helloSvc.lastResult}"/>
</mx:Application>
From stackoverflow
  • I'm not positive, as I can't see the XML that your service would reply with but I think it's probably one of two things:

    1) Your <mx:HTTPService> has no <mx:request> element to format the request to your service. Check this out for a template... Flex 3 Help

    That could either cause the request to never properly be made or your service to choke on the request and never return a result. You'd have to debug the service to see how it's handling the request.

    2) Your service is returning an XML result with namespaces. In that case, you're going to need to check out another Flex 3 Help page about how to handle XML results in the e4x format.

    Bijou : Stupid mistake. I was calling file:///C:/public/bin/flex.html instead of localhost:3000/bin/flex.html. Thanks for your help.
  • I copied and pasted your code into a new Flex App, modifying the URL to point to a script I know works with Flex apps, and it worked just fine.

    I also changed my server-side script to print 'hello world' with a newline, and that worked fine as well.

    Your Flex code appears to be working OK with plain-text, but something is obviously not connecting between the data display and the data itself. I'm not experienced with Rails, but I wonder if your server is outputting data which cannot be parsed, and any exceptions are getting swallowed.

    Here's my suggestion: change your 'sayhello' script so it simply prints a content-header and 'hello world' -- all in plain-text. Make sure it outputs in the browser, and then see if it also works in the Flex app. If it does, your Rails app is probably outputting content which needs to be parsed, as opposed to simply set to the text input. If it doesn't, you'll need to do more debugging.

    BTW, I tried this with both plain-text output and XML output. In both attempts, I was able to view the content in the text-input field.

    Bijou : Stupid mistake. I was calling file:///C:/public/bin/flex.html instead of http://localhost:3000/bin/flex.html. Thanks for your help.
    bedwyr : Glad you got it worked out :)

Rails: How should Phusion Passenger and I18n.locale behave?

I have a Rails 2.2 web app running on Passenger / REE

I set the default locale in config/environment.rb

config.i18n.default_locale = 'en-GB'

The first request seems to have no locale set in I18n.locale

If I then visit a page with a before_filter that sets I18n.locale, every subsequent visit to any controller (even one that doesn't have that same before_filter setting I18n.locale) gets an I18n.locale of whatever was set, say 'en-US'.

On Mongrel with the same code each request gets a locale of 'en-GB', the default, until explicitly set by a before_filter.

Any help appreciated in working out whether this is normal Passenger behaviour.

Thanks,

-Tim

From stackoverflow
  • Mongrel resets the locale "automatically" because it handles almost every request with a different thread, while Phusion Passenger handles everything with the same thread.

    As a workaround, in your before_filter you can always set the locale to nil before setting it based on other parameters. This way the value left over from the previous request is always reset to nil.

    Just ensure that you're setting the locale on each request to the user's preferred locale.

    I had this same problem because I was only setting the locale if an incoming parameter told the site to change the locale. I guess this is something that you always have to set regardless, if you're using Phusion Passenger.

    Micah : I had the same problem, and explicitly setting i18n.locale to nil on each request solved it. Thanks!
    Espen : Setting to nil didn't work for me, had to set I18n.locale = I18n.default_locale

Turn off Debug Logging during Release in Log4net

When I compiled my application in release mode, I found that the Log4Net still logs debug information; any idea how to fix this?

This is my App.Config file:

<configuration>

  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,Log4net"/>
  </configSections>


  <log4net>
    <root>
      <level value="DEBUG" />
      <appender-ref ref="LogFileAppender" />
    </root>
    <appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender" >
      <param name="File" value="C:\Documents and Settings\test\Application Data\Log.txt" />
      <param name="AppendToFile" value="true" />
      <rollingStyle value="Size" />
      <maxSizeRollBackups value="10" />
      <maximumFileSize value="10MB" />
      <staticLogFileName value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <param name="ConversionPattern" value="%-5p%d{yyyy-MM-dd hh:mm:ss} - %m%n" />
      </layout>
    </appender>
  </log4net>

Did I miss anything?

From stackoverflow
  • There's nothing in your App.Config file to tell log4net to do things differently in release or debug mode. If you want logging to be different between the two builds, you have to change your configuration file between the two builds.

    Your best bet is probably to create one App.Config for Release, one for Debug, and then follow the advice in the StackOverflow question:

    NOTE: The difference between your release and debug App.Config will be the following line in the debug version

    <level value="DEBUG" />
    

    versus the following line in the release version (or of course you could choose ERROR or FATAL if you want):

    <level value="INFO" />
    
  • Maybe try something like this instead? Set to whatever minimum level you want to receive.

    <level value="WARN" />
    

What is the difference between mutex and critical section?

Please explain from the Linux and Windows perspectives.

I am programming in C#; would these two terms make a difference? Please post as much as you can, with examples and such...

Thanks

From stackoverflow
  • In Windows, a critical section is local to your process. A mutex can be shared/accessed across processes. Basically, critical sections are much cheaper. Can't comment on Linux specifically, but on some systems they're just aliases for the same thing.

  • A mutex is an object that a thread can acquire, preventing other threads from acquiring it. It is advisory, not mandatory; a thread can use the resource the mutex represents without acquiring it.

    A critical section is a length of code that is guaranteed by the operating system to not be interrupted. In pseudo-code, it would be like:

    StartCriticalSection();
        DoSomethingImportant();
        DoSomeOtherImportantThing();
    EndCriticalSection();
    
    Zifre : Am I incorrect? I would appreciate it if down voters would comment with a reason.
    Mikko Rantanen : +1 because the down vote confuses me. :p I'd say this is more correct than the statements that hint to Mutex and Critical Section being two different mechanisms for multithreading. Critical section is any section of code which ought to be accessed only by one thread. Using mutexes is one way to implement critical sections.
    Michael : I think the poster was talking about user mode synchronization primitives, like a win32 Critical section object, which just provides mutual exclusion. I don't know about Linux, but Windows kernel has critical regions which behave like you describe - non-interruptable.
    Adam Rosenfield : I don't know why you got downvoted. There's the _concept_ of a critical section, which you've described correctly, which is different from the Windows kernel object called a CriticalSection, which is a type of mutex. I believe the OP was asking about the latter definition.
    Mikko Rantanen : At least I got confused by the language agnostic tag. But in any case this is what we get for Microsoft naming their implementation the same as their base class. Bad coding practice!
    Jason Coco : Well, he asked for as much detail as possible, and specifically said Windows and Linux so sounds like concepts are good. +1 -- didn't understand the -1 either :/
  • Critical sections and mutexes are not operating-system specific; they're concepts of multithreading/multiprocessing.

    A critical section is a piece of code that must only be run by one thread at a time (for example, there are 5 threads running simultaneously and a function called "critical_section_function" which updates an array... you don't want all 5 threads updating the array at once. So when the program is running critical_section_function(), none of the other threads must run their critical_section_function).

    A mutex is a way of implementing the critical section code (think of it like a token... the thread must have possession of it to run the critical_section_code).

    configurator : Also, mutexes can be shared across processes.
  • For Windows, critical sections are lighter-weight than mutexes.

    Mutexes can be shared between processes, but always result in a system call to the kernel which has some overhead.

    Critical sections can only be used within one process, but have the advantage that they only switch to kernel mode in the case of contention - Uncontended acquires, which should be the common case, are incredibly fast. In the case of contention, they enter the kernel to wait on some synchronization primitive (like an event or semaphore).

    I wrote a quick sample app that compares the time between the two of them. On my system for 1,000,000 uncontended acquires and releases, a mutex takes over one second. A critical section takes ~50 ms for 1,000,000 acquires.

    Here's the test code, I ran this and got similar results if mutex is first or second, so we aren't seeing any other effects.

    HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
    CRITICAL_SECTION critSec;
    InitializeCriticalSection(&critSec);
    
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    LARGE_INTEGER start, end;
    
    // Force code into memory, so we don't see any effects of paging.
    EnterCriticalSection(&critSec);
    LeaveCriticalSection(&critSec);
    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        EnterCriticalSection(&critSec);
        LeaveCriticalSection(&critSec);
    }
    
    QueryPerformanceCounter(&end);
    
    int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
    
    // Force code into memory, so we don't see any effects of paging.
    WaitForSingleObject(mutex, INFINITE);
    ReleaseMutex(mutex);
    
    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        WaitForSingleObject(mutex, INFINITE);
        ReleaseMutex(mutex);
    }
    
    QueryPerformanceCounter(&end);
    
    int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
    
    printf("Mutex: %d CritSec: %d\n", totalTime, totalTimeCS);
    
    1800 INFORMATION : beats me - maybe you should post your code. I voted you up one if it makes you feel better
    ApplePieIsGood : Well done. Upvoted.
    Troy Howard : Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 then looping while checking it's return has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... Not a mutex. YMMV. I'd be interested in seeing how mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
  • In addition to the other answers, the following details are specific to critical sections on windows:

    • in the absence of contention, acquiring a critical section is as simple as an InterlockedCompareExchange operation
    • the critical section structure holds room for a mutex. It is initially unallocated
    • if there is contention between threads for a critical section, the mutex will be allocated and used. The performance of the critical section will degrade to that of the mutex
    • if you anticipate high contention, you can allocate the critical section specifying a spin count.
    • if there is contention on a critical section with a spin count, the thread attempting to acquire the critical section will spin (busy-wait) for that many processor cycles. This can result in better performance than sleeping, as the number of cycles to perform a context switch to another thread can be much higher than the number of cycles taken by the owning thread to release the mutex
    • if the spin count expires, the mutex will be allocated
    • when the owning thread releases the critical section, it is required to check if the mutex is allocated, if it is then it will set the mutex to release a waiting thread

    In linux, I think that they have a "spin lock" that serves a similar purpose to the critical section with a spin count.

    Promit : Unfortunately a Window critical section involves doing a CAS operation *in kernel mode*, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
    Michael : That is definitly not true. CAS can be done with cmpxchg in user mode.
    1800 INFORMATION : I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
  • From a theoretical perspective, a critical section is a piece of code that must not be run by multiple processes at once because the code accesses shared resources.

    A mutex is an algorithm (and sometimes the name of a data structure) that is used to protect critical sections.

    Semaphores and Monitors are common implementations of a mutex.

    In practice there are many mutex implementations available in Windows. They differ mainly, as a consequence of their implementation, in their level of locking, their scope, their cost, and their performance under different levels of contention. See CLR Inside Out - Using concurrency for scalability for a chart of the costs of different mutex implementations.

    Available synchronization primitives.

    The lock(object) statement is implemented using a Monitor (see the sketch below). See MSDN for reference.

    In recent years much research has been done on non-blocking synchronization. The goal is to implement algorithms in a lock-free or wait-free way. In such algorithms a process helps other processes finish their work so that it can finally finish its own. As a consequence, a process can finish its work even when other processes that tried to perform some work hang. Using locks, they would not release their locks, preventing other processes from continuing.
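
    For the C# side of the question, a small illustration of the two flavours (Monitor via the lock statement for in-process mutual exclusion, System.Threading.Mutex when the lock must be visible to other processes); the class and mutex names are made up for the example:

    using System.Threading;

    class Counter
    {
        private readonly object _gate = new object();   // in-process lock, used by Monitor
        private int _count;

        public void Increment()
        {
            lock (_gate)            // cheap; cannot be shared with other processes
            {
                _count++;
            }
        }
    }

    class SingleInstanceGuard
    {
        // A named mutex is a kernel object, so a second process that opens the
        // same name sees it and can back off.
        private static readonly Mutex AppMutex =
            new Mutex(false, @"Global\MyAppSingleInstance");

        public static bool TryEnter() { return AppMutex.WaitOne(0); }

        // Call only from the thread that successfully entered.
        public static void Exit() { AppMutex.ReleaseMutex(); }
    }
    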

  • Just to add my 2 cents, critical sections are defined as a structure, and operations on them are performed in a user-mode context.

    ntdll!_RTL_CRITICAL_SECTION
       +0x000 DebugInfo        : Ptr32 _RTL_CRITICAL_SECTION_DEBUG
       +0x004 LockCount        : Int4B
       +0x008 RecursionCount   : Int4B
       +0x00c OwningThread     : Ptr32 Void
       +0x010 LockSemaphore    : Ptr32 Void
       +0x014 SpinCount        : Uint4B
    

    Whereas mutexes are kernel objects (ExMutantObjectType) created in the Windows object directory. Mutex operations are mostly implemented in kernel mode. For instance, when creating a Mutex, you end up calling nt!NtCreateMutant in the kernel.

    Ankur : What happens when a program that initializes and uses a Mutex object, crashes? Does the Mutex object gets automatically deallocated? No, I would say. Right?
    Michael : Kernel objects have a reference count. Closing a handle to an object decrements the reference count and when it reaches 0 the object is freed. When a process crashes, all of its handles are automatically closed, so a mutex that only that process has a handle to would be automatically deallocated.
  • The Linux equivalent of the 'fast' Windows critical section would be a futex, which stands for fast userspace mutex. The difference between a futex and a mutex is that with a futex, the kernel only becomes involved when arbitration is required, so you save the overhead of talking to the kernel each time the atomic counter is modified. A futex can also be shared amongst processes, using the means you would employ to share a mutex.

    Unfortunately, futexes can be very tricky to implement (PDF).

    Beyond that, it's pretty much the same across both platforms. You're making atomic, token-driven updates to a shared structure in a manner that (hopefully) does not cause starvation. What remains is simply the method of accomplishing that.