Monday, April 11, 2011

Changing ListBox selection is not moving changes from BindingSource to DataSet.

The answer to this question may turn out to be, "Don't use typed DataSets without using the Binding Navigator." I am curious, however, about the behavior I'm seeing.

So, I created a form where every control was dragged from the data sources explorer. I deleted the Binding Navigator because it is ugly and inappropriate for this particular form. I added a ListBox and set the DataSource to the BindingSource.
Notice that the ListBox is not bound; it is just filling itself from the BindingSource. By some magic that I wasn't counting on, moving around in the ListBox navigates the BindingSource, and all the other controls update accordingly.

I can make changes to the bound controls and explicitly call EndEdit on the BindingSource and then update the DataSource through the Table Adapter. Works great.
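For reference, that explicit save path boils down to something like this (a minimal sketch using the designer-generated names that appear in the answer below):

    myBindingSource.EndEdit();                    // push pending edits from the controls into the DataSet
    myTableAdapter.Update(myDataSet.myDataTable); // write the changed rows back to the database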

When I make changes in the bound controls and click a new option in the ListBox, I want to be able to check for changes and prompt to save or reset if there are any.

Here is the strange part that I haven't been able to figure out.

No matter what event I attach to, DataSet.HasChanges doesn't return true until the second ListBox change. I've searched and tried dozens of suggestions, most of them ridiculous, but a few that seemed promising. No luck.

Edit: It isn't the second click that is significant, it is when you click back on the original (edited) item.

From stackoverflow
  • Since asking the question, I've learned a bit more about BindingSources, DataSets and TableAdapters.

    Here is what works:

        private void MyListBox_Click(object sender, EventArgs e)
        {
            // Flush pending edits from the bound controls into the DataSet
            // so HasChanges() reflects them.
            this.myBindingSource.EndEdit();
            if (myDataSet.HasChanges())
            {
                if (MessageBox.Show("Save changes?", "Before moving on", MessageBoxButtons.YesNo) == DialogResult.Yes)
                {
                    // Push the changed rows back to the database.
                    myTableAdapter.Update(myDataSet.myDataTable);
                }
                else
                {
                    // Discard the edits and revert to the stored values.
                    myDataSet.RejectChanges();
                }
            }
        }
    

How can I pre-compress files with mod_deflate in Apache 2.x?

I am serving all content through Apache with Content-Encoding: gzip, but that compresses on the fly. A good amount of my content is static files on disk. I want to gzip the files beforehand rather than compressing them every time they are requested.

This is something that, I believe, mod_gzip handled automatically in Apache 1.x if you just had a file with a .gz extension next to the original. That's no longer the case with mod_deflate.

From stackoverflow
  • This functionality was misplaced in mod_gzip anyway. In Apache 2.x, you do that with content negotiation. Specifically, you need to enable MultiViews with the Options directive and you need to specify your encoding types with the AddEncoding directive.
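    For example, a minimal sketch (the path here is hypothetical):

        <Directory /var/www/site>
            Options +MultiViews
            AddEncoding x-gzip .gz
        </Directory>

    Negotiation only kicks in when the requested name itself doesn't match an existing file, so a request for /page can then be answered with page.html or page.html.gz depending on the client's Accept-Encoding header.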

  • You can use mod_cache to proxy local content in memory or on disk. I don't know if this will work as expected with mod_deflate.

  • mod_gzip compressed content on the fly as well. You can pre-compress the files by actually logging into your server, and doing it from the shell.

    cd /var/www/.../data/
    for file in *; do
        gzip -c "$file" > "$file.gz";
    done;
    
    Otto : This will remove the original files, which means clients that don't have Accept-Encoding: gzip won't be serviced.
    Aeon : good point, updated.
    Otto : While you're editing, why not add -9 and get the highest compression possible. My 1500 files compressed in 38 seconds, so it's worth doing to save every byte possible in bandwidth and download time. :) (Also wishing I could edit my typo in my previous comment. Ugh)
    Aristotle Pagaltzis : -9 is the default anyway.
    Otto : Not according to the man page on my Mac, it says -6 is the default.
  • To answer my own question with the really simple line I was missing in my configuration:

    Options FollowSymLinks MultiViews
    

    I was missing the MultiViews option. It's there in the Ubuntu default web server configuration, so don't be like me and leave it out.

    Also I wrote a quick Rake task to compress all the files.

    namespace :static do
      desc "Gzip compress the static content so Apache doesn't need to do it on-the-fly."
      task :compress do
        puts "Gzipping js, html and css files."
        Dir.glob("#{RAILS_ROOT}/public/**/*.{js,html,css}") do |file|
          system "gzip -c -9 #{file} > #{file}.gz"
        end
      end
    end
    
  • I have an Apache 2 built from source, and I found I had to modify the following in my httpd.conf file:

    Add MultiViews to Options:

    Options Indexes FollowSymLinks MultiViews
    

    Uncomment AddEncoding:

    AddEncoding x-compress .Z
    AddEncoding x-gzip .gz .tgz
    

    Comment out AddType:

    #AddType application/x-compress .Z
    #AddType application/x-gzip .gz .tgz
    
  • This is mostly working for me. But if I go to http://ismyblogworking.com/www.whatsthatbug.com to check http compression, there is one problem:

    "# Your blog page content type is application/x-gzip, not HTML or XHTML."

    This is causing a few people to get a download prompt instead of the compressed page. Do I need to use a Content-Type tag or something to fix this?

    EDIT: Never mind, I think I just wasn't patient enough. It appears to be correct now.

  • I have the same issue on my Ubuntu 9.10 box with Apache. I have enabled mod_deflate but am unable to get it to serve html.gz pages. I also added the MultiViews option in my virtual host settings, but it still asks me to download the file.

    Can anyone help me with the exact settings?

Bouncing Ball in Java

This is probably a really basic problem but I can't seem to find any other articles on it.

Anyway, I have written a small bouncing ball program in Java to try and expand my basic skills. The program is just a simple bouncing ball that will drop and hopefully bounce for a while. The original program worked fine, but now I have tried to add gravity. The gravity actually works fine for a while, but once the bounces get really small the animation becomes erratic for a very short time, and then the position of the ball just constantly decreases. I've tried to figure out the problem but I just can't see it. Any help would be most welcome.

public final class Ball extends Rectangle {

    float xspeed = 1.0f;
    float yspeed = 1.0f;
    float gravity = 0.4f;

    public Ball(float x, float y, float width, float height) {
        super(x, y, width, height);
    }

    public void update() {
        yspeed += gravity;

        move(xspeed, yspeed);

        if (getX() < 0) {
            xspeed = 1;
        }
        if (getX() + getWidth() > 320) {
            xspeed = -1;
        }
        if (getY() < 0) {
            yspeed = 1;
        }
        if (getY() + getHeight() > 200 && yspeed > 0) {
            yspeed *= -0.98f;
        }
        if (getY() + getHeight() > 200 && yspeed < 0) {
            yspeed *= 0.98f;
        }
    }

    public void move(float x, float y) {
        this.setX(getX() + x);
        this.setY(getY() + y);
    }
}

EDIT: Thanks, that seems to have sorted the erratic movement. I'm still struggling to see how I can stop the ball moving down when it has stopped bouncing. Right now it will stop bouncing, then continue moving down past the "floor". I think it's to do with my yspeed += gravity line. I just can't see how I'd go about stopping the downward movement.

From stackoverflow
  • When you do

    yspeed += gravity;
    

    you are assuming that the ball has space to move through a distance dx = v_i * t + 1/2 (-g) t^2. When you are very near the floor this may not be true. It fails if:

    • You are near enough to the floor and moving down
    • You are very near the floor and have low velocity (like when the ball has lost most of its energy)

    This bug causes your simulation to stop conserving energy, resulting in the erratic behavior you see at low amplitude.

    You can reduce the problem by using smaller time steps, and you can get rid of it outright if you do a test computation to notice when you're out of room and select a safe time step for that iteration (i.e. always use your default time step unless there is a problem, then calculate the best one).

    However, the basic approximation you're using here has other problems as well. Look in any numeric analysis text for a discussion of solving differential equations numerically.

  • I suspect it's because when the ball bounces, it will actually be slightly below the "ground", and at low speeds, it won't move back above the ground in one tick - so the next update() will see it still below the ground, and bounce again - but downwards this time, so the cycle continues.

    You need to move the ball back up to ground level when it bounces, something like this:

        if(getY() + getHeight() > 200){
            yspeed *= -0.981;
            setY(200 - getHeight());
        }
    
    paxdiablo : move seems to take deltas, not absolute values, your ball is likely to fly off the edge of the universe :-)
    Blorgbeard : heh, oops. Fixed.
  • Similar question: How do I apply gravity to my bouncing ball application?

  • First things first: setting yspeed to 1 when you bounce off the top of the window is not correct; you should set yspeed to -yspeed (though if you start within the borders, it should never bounce up to the top anyway).

    Secondly, your multiply by -0.981 when bouncing on the bottom is okay, but I'm concerned with the constant 0.4 gravity being added to yspeed every iteration. I think that's what is causing your wiggles at the bottom, since you do the move before checking, which can result in the ball dropping below ground level.

    I would try ensuring that the y value can never go below ground level by replacing the move with:

    if (getY() + getHeight() + yspeed > 200) {
        move(xspeed, 200 - getY() - getHeight());
    } else {
        move(xspeed, yspeed);
    }
    
  • The problem is that when the bounces get really small, the

    yspeed *= -0.981;
    

    line will get called in short succession. The ball will go below the bottom, start coming back up, but eventually still be below the bottom (because 0.981 < 1.0), and it will behave erratically. Here's how you fix it:

    if(getY() + getHeight() > 200){
      yspeed *= -0.981;
      setY(400 - getY() - getHeight()); // I believe this is right.
    }
    

    By fixing the position, you won't alternate between decreasing and increasing as quickly and won't get stuck in the situation where it is always decreasing because it is always below the bounds.

    qpingu : 200 - (getY() + getHeight() - 200) = 200 - getY() - getHeight() + 200 = 400 - getY() - getHeight()
  • [EDIT: I think I misunderstood, so this probably isn't much use :) ]

    if(getY() + getHeight() > 200){
      yspeed *= -0.981;
    }
    

    You're negating the vertical velocity on every update. I'd probably try handling gravity in update-sized slices. Assuming you're doing 30 updates per second (for 30fps), maybe something like

    // Define some constants
    SecondsPerUpdate = (1.0f / 30);
    AccelDueToGravity = 0.981;
    
    if(getY() + getHeight() > 200){
      yspeed -= (AccelDueToGravity * SecondsPerUpdate);
    }
    

Regular Expression Compiler

I have had the need to use regular expressions only a few times in the work that I have done; however, in those few times I discovered a very powerful form of expression that would enable me to do some extremely useful things.

The problem is that the language used for regular expressions is wrong - full stop.

It is wrong from a psychological point of view - using disembodied symbols provides a useful reference only to those with an eidetic (photographic) memory. Whilst the syntactic rules are clearly laid out, from my experience and what I have learnt from others, evolving a regular expression that functions successfully can prove to be a difficult thing to do in all but the most trivial situations. This is understandable since it is a symbolic analog for set theory, which is a fairly complicated thing.

One of the things that can prove difficult is dissolving the expression that you are working on into its discrete parts. Due to the nature of the language, it is possible to read one regular expression in multiple ways if you don't have an understanding of its primary goal, so interpreting other people's regexes is complicated. In natural language study I believe this is called pragmatics.

The question I'd like to ask then is this - is there such a thing as a regular expression compiler? Or can one even be built?

It could be possible to consider regexes, from a metaphorical point of view, as assembly language - there are some similarities. Could a compiler be designed that could turn a more natural language - a higher language - into regular expressions? Then in my code, I could define my regexes using the higher level language in a header file and reference them where necessary using a symbolic reference. I and others could refer from my code to the header file and more easily appreciate what I am trying to achieve with my regexes.

I know it can be done from a logical point of view - otherwise computers wouldn't be possible - but if you have read this far (which is unlikely :) then would you consider investing the time in realising it?

From stackoverflow
  • I never stumbled across something like that. And I don't think that something like that would be useful.

    That higher-level language would be very verbose and my guess is that you'd need pretty long statements to come up with a regular expression of average complexity.

    Maybe you just haven't been using regular expressions often enough. Believe me, my memory is far from being eidetic (or even good), but I rarely have problems crafting regular expressions or understanding those of my coworkers.

  • One way you can bypass this problem is by using programs like QuickREx, which shows how a regex works against multiple pieces of test data (with highlights). You could save the test data in a file near your regex; later, when you want to change, understand or fix it, that would be much easier.

  • Have you considered using a parser generator (aka compiler compiler) such as ANTLR?

    ANTLR also has some kind of IDE (ANTLR Works) where you can visualize/debug parsers.

    On the other hand, a parser generator is not something to throw into your app in a few seconds like a regex - and it would be total overkill for something like checking email address format.

    Also for simple situations this would be total overkill and maybe a better way is just to write comments for your regex explaining what it does.

  • What about writing them with RegexBuddy and pasting the description it generates as a comment in your code?

    Michael Haren : +1: regex is extremely hard to read, but this is a tooling issue, not a language issue
  • 1) Perl permits the /x switch on regular expressions to enable comments and whitespace to be included inside the regex itself. This makes it possible to spread a complex regex over several lines, using indentation to indicate block structure.
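    For example, a date-matching pattern in /x style (a minimal sketch):

        my $date_re = qr/
            ^
            (\d{4})   # year
            -
            (\d{2})   # month
            -
            (\d{2})   # day
            $
        /x;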

    2) If you don't like the line-noise-resembling symbols themselves, it's not too hard to write your own functions that build regular expressions. E.g. in Perl:

    sub at_start { '^'; }
    sub at_end { '$'; }
    sub any { "."; }
    sub zero_or_more { "(?:$_[0])*"; }
    sub one_or_more { "(?:$_[0])+"; }
    sub optional { "(?:$_[0])?"; }
    sub remember { "($_[0])"; }
    sub one_of { "(?:" . join("|", @_) . ")"; }
    sub in_charset { "[$_[0]]"; }        # I know it's broken for ']'...
    sub not_in_charset { "[^$_[0]]"; }   # I know it's broken for ']'...
    

    Then e.g. a regex to match a quoted string (/^"(?:[^\\"]|\\.)*"/) becomes:

    at_start .
    '"' .
    zero_or_more(
        one_of(
            not_in_charset('\\\\"'),    # Yuck, 2 levels of escaping required
            '\\\\' . any
        )
    ) .
    '"'
    

    Using this "string-building functions" strategy lends itself to expressing useful building blocks as functions (e.g. the above regex could be stored in a function called quoted_string(), you might have other functions for reliably matching any numeric value, an email address, etc.).

  • There are ways to make REs in their usual form more readable (such as the Perl /x syntax), and there are several much wordier languages for expressing them.

    I note, however, that a lot of old hands don't seem to like them.

    There is no fundamental reason you couldn't write a compiler for a wordy RE language targeting a compact one, but I don't see any great advantage in it. If you like the wordy form, just use it.

  • Regular Expressions (well, "real" regular expressions, none of that modern stuff ;) are finite state machines. Therefore, you could create a syntax that describes a regular expression in terms of states, edges, input and possibly output labels. The fsmtools of AT&T support something like that, but they are far from a tool ready for everyday use.

    The language in XFST, the Xerox finite state toolkit, is also more verbose.

    Apart from that, I'd say that if your regular expression becomes too complex, you should move on to something with more expressive power.

  • XML Schema's "content model" is an example of what you want.

    c(a|d)+r
    

    can be expressed as a content model in XML Schema as:

    <sequence>
     <element name="c" type="xs:string"/>
     <choice minOccurs="1" maxOccurs="unbounded">
      <element name="a" type="xs:string"/>
      <element name="d" type="xs:string"/>
     </choice>
     <element name="r" type="xs:string"/>
    </sequence>
    

    Relax NG has another way to express the same idea. It doesn't have to be an XML format itself (Relax NG also has an equivalent non-XML syntax).

    The readability of regex is lowered by all the escaping necessary, and a format like the above reduces the need for that. Regex readability is also lowered when the regex becomes complex, because there is no systematic way to compose larger regular expressions from smaller ones (though you can concatenate strings). Modularity usually helps. But for me, the shorter syntax is tremendously easier to read (I often convert XML Schema content models into regex to help me work with them).

  • I agree that the line-noise syntax of regexps is a big problem, and frankly I don't understand why so many people accept or defend it; it's not human-readable.

    Something you don't mention in your post, but which is almost as bad, is that nearly every language, editor, or tool has its own variation on regexp syntax. Some of them support POSIX syntax as it was defined so many years ago, some support Perl syntax as it is today. But many have their own independent ways of expressing things, or which characters are "special" (special characters is another topic) and which are not. What is escaped and what isn't. Etc. Not only is it difficult to read a regexp written for one language or tool, but even if you totally memorize the syntax rules for your favorite variation, they can trip you up in a different language, where {2,3} no longer means what you expect. It's truly a mess.

    Furthermore, I think there are many non-programmers who (if they knew it existed) would appreciate having a pattern-matching language they could use in everyday tools like Google or Microsoft Word. But there would need to be an easier syntax for it.

    So, to answer your question, I have often thought of making some kind of cross-platform, cross-language, cross-everything library that would allow you to "translate" from any regexp syntax (be it Perl, or POSIX, or Emacs, etc) into any other regexp syntax. So that you wouldn't have to worry if Python regexps could do negative look-behind, or if character-class brackets should be escaped in an Emacs regexp. You could just memorize one syntax, then make a function call to get out the equivalent syntax for whatever you happened to be using.

    From there, it could be extended with a new pattern-matching language, that would be a bit more verbose or at least more mnemonic. Something for people who don't want to spend half-an-hour studying a regexp to figure out what it does. (And people who think regexps are fine as they are have obviously never had to maintain anything they didn't write themselves, or they would understand the need for other people to be able to parse what they've written.)

    Will I ever attempt such a beast? I don't know, it's been on my to-do list for a long time, and there are a lot of easier and more entertaining projects on there as well. But if you are contemplating something similar, let me know.

  • regular expression compiler:

    ftp://reports.stanford.edu/pub/cstr/reports/cs/tr/83/972/CS-TR-83-972.pdf

Session isn't passing over domain

In my Rails app, when I log in at www.site.com, I am logged in just fine. But if, without logging out, I go to http://site.com, I am logged out there - yet still logged in at www.site.com if I go back.

I can't find anything to set in my environment configuration. Any idea how to keep this session across all subdomains of my domain?

From stackoverflow
  • Set the session cookie properly; that is, for .site.com and site.com rather than just for www.site.com.

    Garrett : I don't know what you mean by setting the session cookie properly. I don't ever remember setting it. I just use session[:user_id] and move along.
  • since they alias www. to .; couldn't you just prepend www. onto the .?

  • You should redirect www.site.com to site.com (or the other way around). If you don't do that, google may think it's two different sites.

    Matt : +1 whilst this doesn't answer the question, it is certainly something you should implement.
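    For instance, a minimal Apache rewrite for that redirect (a sketch; adjust the host name and scheme for your setup):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.site\.com$ [NC]
        RewriteRule ^(.*)$ http://site.com/$1 [R=301,L]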
  • When you set a session cookie for "site.com", that will be different than one for "www.site.com". You need to specify the "cookie_domain" as ".site.com", which will set the cookie for all subdomains as well. In PHP, you could use ini_set or session_set_cookie_params to set session.cookie_domain. In Rails, you can add a small script to environment.rb - something like:

    ActionController::Base.session_options[:session_domain] = '.site.com'

    (in this case you might also do some switching based on the domain name in production/test/development env's) or try some other configuration options.

    Here's more than you'd ever want to know on the subject.

    Garrett : Following what you said worked. :-)
  • In Rails 2.3 this has been changed to:

    config.action_controller.session[:domain] = '.example.com'
    

    or if the session variable hasn't been created yet

    config.action_controller.session = {:domain => '.example.com'}
    

    See http://stackoverflow.com/questions/663893/losing-session-in-rails-2-3-2-app-using-subdomain/978716

Pros/Cons of Binary Reference VS WCF

I am in the process of implementing an enhancement to an existing web application (A). The new solution will provide features (charts/images/data) to application A. The enhancement will be a new project and will generate new assemblies. I am trying to identify the most elegant way to read this information:

1. Make a binary reference and read the data directly. The new assemblies live with your application and the two are married together.

2. Make a WCF call and get the data. This helps to decouple the application.

The new application will require me to buy some expensive licences, so if I go with the 2nd option I can limit the license fee to a single server or at most 2-3. My current application runs under a web farm of 8 servers.

Please share the pros/cons of both approaches.

Thanks.

From stackoverflow
  • If you decouple the two pieces sufficiently, you will also permit the use of clients running something other than .NET. Using the first option, you could only support .NET clients. This may turn out to be important, even if today you are absolutely certain that only .NET will ever be used - tomorrow, your company may be purchased by another which is a Java or PHP shop.

    Even if you never need to support a non .NET client, coupling to the assemblies will require you to maintain version compatibility between the client and server. If this is not necessary, then use option #2.

  • The benefit of using WCF (decoupled approach) is that you get a deployment option to take it outside of the machine if it impacts the machine too much in terms of processing or storage.

    The downside is that you'll likely pay some performance hit compared to linking directly.

    I'm sure you can do some dynamic linking so you don't have to deploy to all 8 servers.

Building a large form, need advice

I have to build a large form for users to fill out in order to apply for graduate study at the college I work for. There will be a large amount of information to collect (multiple addresses, personal information, business information, past school information, experience, etc...) and I want to know the best way to handle all this. I'm going to be using PHP and Javascript.

Are there any helpers or pieces of frameworks that I can use to help with the building/validation of the form, something I can just pop into my existing project?

I would also like any advice on keeping track of a large form and the resulting data.

From stackoverflow
  • You need to use multiple pages, and you need to include a mechanism whereby users can leave, and come back and fill out the rest of the form later (or if they're accidentally disconnected). Otherwise you're going to have all sorts of user issues, not due to your service, but because they're using computers and internet connections that are flaky, etc.

    Survey software is probably a reasonable approximation of what you're doing, and there are survey packages for most PHP CMS's. Are you building this from scratch, or do you have an existing CMS underneath?

  • A List Apart has an article on building sensible forms that is a good read.

    Why does the form need to be large in the first instance? Can't you trim it down to the bare essentials for the account and provide a way for them to come back later to flesh out the rest of the details?

    For form validation, take a gander at the jQuery validation plugin, Validation.

  • A few tips, without knowing all the specifics of your form:

    Don't show the user everything at once - this can be accomplished by multiple pages, or by selectively showing/hiding elements on the form as the user progresses through it. Provide contextual navigation that says "You're on step 3 of 10" so the user can get a sense of where they are in the form and how much effort is required to finish it.

    Providing a mechanism to save and return later is a fantastic idea. If possible, provide a link to an email account of their choosing - you want to make this component as easy to use as possible, and requiring them to fill out an additional username/password to retrieve their data is just another barrier to completion.

    Only ask for what you absolutely need. Yes, you're going to have to fight some political battles here - everyone wants as much as they can get. One way to combat this (especially effective when you have pressure from multiple groups) is to build out some prototypes: 1 with EVERYTHING and one with several sections reduced or removed. Have stakeholders from each group fill out both of them and measure their time to completion or roll-throughput yield. When you've got completion data, and they realize how much every other group is asking for (in addition to their group) they are easier to work with. In short, remove as much as possible - let the user go back later to provide more details if they wish.

    Write down all your inputs on index cards and see how they logically fit together. More often than not you will find more efficient groupings or orderings. More than likely you will come up with much more usable ideas. This is extremely important when converting paper forms to online forms. Usability.gov has a fantastic case study on this topic.

  • Well, I agree with Adam, but I have some advice for you.

    If I were you, I would create virtual hidden tabs instead of multiple forms with a next button - divs you can show and hide with JavaScript (see the sketch after this list). First show the one that collects personal information like name, birthday, email, etc. Once the user has filled it out and clicked the next button, hide it and show the next one, which asks for other information like addresses, and so on.

    Once the whole flow is completed, put a submit button on the last div that submits all the information to the server at once.

    But why do so?

    1. The user will not be overwhelmed, because they never see one long form all at once, and will fill it out patiently.

    2. You hit the server only once; university and college servers are usually busy, so you'd better design a form that hits the server the least. This counts as a performance tip.

    3. Since you submit all the data at once, you don't have to worry about whether the user will continue on to the other pages or not, so you use less session state, which again counts as a performance tip.

    4. This approach makes your form more interesting, and you can say you did something Ajax-like.
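    A minimal sketch of the tab idea with jQuery (the markup and selectors here are hypothetical):

        var current = 0;
        function showStep(n) {
            $("div.step").hide();       // hide every step panel
            $("div.step").eq(n).show(); // reveal only the requested one
            current = n;
        }
        $(document).ready(function () {
            showStep(0);                   // start at the first panel
            $(".next").click(function () { // each panel has a Next button
                showStep(current + 1);
                return false;
            });
        });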

  • You can add Javascript form validation to make it more user-friendly, but one thing you should never skimp on is the server-side validation... which has historically been awful in PHP.

    One thing that'll make your life a million times easier here is the filter library, especially filter_input_array() since you can build the input validation programmatically instead of having to copy and paste a lot of checks. It takes some getting used to, but it's much, much better than the old way of doing things.
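    A minimal sketch of that programmatic style (the field names and rules here are hypothetical):

        <?php
        // One declarative array drives all of the server-side checks.
        $filters = array(
            'email' => FILTER_VALIDATE_EMAIL,
            'age'   => array(
                'filter'  => FILTER_VALIDATE_INT,
                'options' => array('min_range' => 16, 'max_range' => 120),
            ),
        );
        $input = filter_input_array(INPUT_POST, $filters);

        // A failed field comes back as false; a missing one comes back as null.
        if ($input === null || in_array(false, $input, true)) {
            // redisplay the form with error messages
        }
        ?>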

Issues with HTTP Compression?

We are investigating the use of HTTP Compression on an application being served up by JBoss. After making the setting change in the Tomcat SAR, we are seeing a compression of about 80% - this is obviously great, however I want to be cautious... before implementing this system wide, has anyone out there encountered issues using HTTP Compression?

A couple points to note for my situation.

  • We have full control over browser - so the whole company uses IE6/7
  • The app is internal only
  • During load testing, our app server was under relatively small load - the DB was our bottleneck
  • We have control over client machines and they all get a spec check (decent processor/2GB RAM)

Any experiences with this would be much appreciated!

From stackoverflow
  • Compression is not considered exotic or bleeding edge and (fwiw) I haven't heard of or run into any issues with it.

  • Compression on the fly can increase CPU load on the server. If at all possible, pre-compressing static resources and caching compressed dynamic responses can combat that.

  • As long as you respect the client's Accept-Encoding header properly (i.e. don't serve compressed files to clients that can't decompress them), you shouldn't have a problem.

    Oh, and remember that deflate is faster than gzip.

  • It's just a really good idea all the way around. It will add a slight CPU load to your server, but that's usually not your bottleneck. It will make your pages load faster, and you'll use less bandwidth.

AS400 Data Connection in ASP.NET

I have an application that will reside within a business2business network that will communicate with our AS400 in our internal network environment. The firewall has been configured to allow the data request through to our AS400, but we are seeing a huge lag in connection speed and response time. For example, what takes less than half a second in our local development environments is taking upwards of 120 seconds in our B2B environment.

This is the function that we are utilizing to get our data. We are using the enterprise library application blocks, so the ASI object is the Database...

/// <summary>
/// Generic function to retrieve data table from AS400
/// </summary>
/// <param name="sql">SQL String</param>
/// <returns>A DataTable containing the first result table, or null if the query returned none</returns>
private DataTable GetASIDataTable(string sql)
{
    DataTable tbl = null;

    HttpContext.Current.Trace.Warn("GetASIDataTable(" + sql + ") BEGIN");
    using (var cmd = ASI.GetSqlStringCommand(sql))
    {
        using (var ds = ASI.ExecuteDataSet(cmd))
        {
            if (ds.Tables.Count > 0) tbl = ds.Tables[0];
        }
    }
    HttpContext.Current.Trace.Warn("GetASIDataTable() END");
    return tbl;
}

I am trying to brainstorm some ideas to consider as to why this is occurring.

From stackoverflow
  • Sorry, but I can't tell you what is going on; I just have a couple of comments... First I would output the SQL and see if it has a lot of joins and/or is hitting a table (file) with a large number of records. If you really want to dig in, fire up your profiler of choice (I use ANTS Profiler) and try to find a profiler for the 400 - see what the server resources are, as well as the actual query after it goes through the ODBC driver.

    I have worked with ASP.NET and the AS400 a few times, and the way I have been most successful is actually using SQL Server with a linked server to the AS400. I created a view to make it simpler to work with - hiding the oddities of AS400 naming. It worked well in my scenario because the application needed to pull information from SQL Server anyway.

    I thought I would mention it in case it helps... best of luck
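    A sketch of what that linked-server view might look like (all names here are hypothetical):

        -- Four-part name: linkedserver.catalog.schema.table
        CREATE VIEW dbo.vCustomers AS
        SELECT CUSTNO AS CustomerNumber,
               CUSTNM AS CustomerName
        FROM   AS400LINK.S1051234.MYLIB.CUSTMAST;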

    RSolberg : The selects are like this: select a, b, c where x = y... Nothing special. Much appreciated though.
  • I have never used ASP.NET or the AS400 in anger, but I have seen this kind of behaviour before, and it usually indicates some kind of network problem - typically a reverse DNS lookup that is timing out.

    Assuming you have ping enabled through your firewall, check that you can ping in both directions.

    Also run traceroute from each machine to try and diagnose where a delay might be.

    Hope that helps.

    RSolberg : @brindy - Thanks! I'll get on this tomorrow.
  • Check the size of your iSeries system as well. Depending on the size of the query, and if the system is undersized for the applications running on it, this may take time. While it shouldn't be thrown out as a possibility, I have seen similar behavior in the past. But of course a network issue is more likely.

    The other idea, if you can't solve the speed issue or it turns out to be a sizing problem, is to store the data in MS SQL Server and then write the records from SQL Server to the iSeries.

How to make a return type for a result set in LINQ

I am having a problem determining how C# and LINQ solve the common problem of handling a query that does not return a mapped table structure, but instead a result set.

I have a stored procedure that works, and have included it in my DBML

[Function(Name="dbo.p_GetObject")]
public int p_GetObject([Parameter(Name="ObjectType", DbType="NVarChar(200)")] string objectType, [Parameter(Name="ItemState", DbType="Bit")] System.Nullable<bool> itemState, [Parameter(Name="IsPublished", DbType="Bit")] System.Nullable<bool> isPublished)
{
    IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), objectType, itemState, isPublished);
    return ((int)(result.ReturnValue));
}

The DBML says that the return type is (None), and this could be the crux of the issue. However, I don't have a DBML object that matches the result set.

The SP takes three parameters and returns a result set with three columns (ID, Name, Value) and multiple rows. I can create a data object for this, and call it resultSet.

When I write a function call for this, I get stuck:

public List<resultset> GetObject(string objectType, bool itemState, bool isPublished)
{
    MyDataContext.p_GetObject(objectType, itemState, isPublished);
}

My questions are:

How do I have the data context's call to the stored procedure populate my resultSet object? Is there a better approach? What should the return type be? A SQL view? Looking for good suggestions...

From stackoverflow
  • If it simply isn't understanding your SP, that could be the SET FMTONLY issue... try generating the data from a simplified version of the SP?

    Normally, SPs / UDFs that don't map 1:1 with an existing entity would expose themselves in a generated type. You can rename this in the DBML file (not in the designer), but personally I wouldn't; I tend to mark the SP as private, and write my own method that projects into my own POCO type (defined for the repository):

    var typed = from row in cxt.SomeFunction(123)
                select new MyType {Id = row.Id, Name = row.Name, ...}
    

    The reason for this is partly for repository purity, and partly to guard against the designer's habit of re-writing the DBML in unexpected ways ;-p See here for more.

    Ash Machine : Just to be clear, the SP is called from the app and works fine, I just don't know the best approach to return it's results into a POCO. I will look into generated types as you mention. Any specific hints on your method above would be helpful. Thanks Marc.
    Marc Gravell : I'm not sure what I can add - either rename the type in the DBML, or create your own type and use "select" as above... if you clarify the ambiguous area, I can add more detail...
    Ash Machine : Marc, I figured out the problem. My stored procedure was using dynamic SQL, where the SQL is formed based on input parameters and then the SQL string is executed. Such a proc does not Autogenerate a type in the DBML designer, so like you suggested, I had to write my own ISingleResult.

Hibernate: Difference between session.get and session.load

From the API, I could see it has something to do with proxies. But I couldn't find a lot of information on proxies, and I do not understand the difference between calling session.get and session.load. Could someone please explain or direct me to a reference page?

Thank you!!

From stackoverflow
  • From the Hibernate forum:

    This is from the book Hibernate in Action. A good one - read this..


    Retrieving objects by identifier

    The following Hibernate code snippet retrieves a User object from the database:

    User user = (User) session.get(User.class, userID);
    

    The get() method is special because the identifier uniquely identifies a single instance of a class. Hence it’s common for applications to use the identifier as a convenient handle to a persistent object. Retrieval by identifier can use the cache when retrieving an object, avoiding a database hit if the object is already cached. Hibernate also provides a load() method:

    User user = (User) session.load(User.class, userID);
    

    The load() method is older; get() was added to Hibernate’s API due to user request. The difference is trivial:

    If load() can’t find the object in the cache or database, an exception is thrown. The load() method never returns null. The get() method returns null if the object can’t be found.

    The load() method may return a proxy instead of a real persistent instance. A proxy is a placeholder that triggers the loading of the real object when it's accessed for the first time; we discuss proxies later in this section. On the other hand, get() never returns a proxy.

    Choosing between get() and load() is easy: If you're certain the persistent object exists, and nonexistence would be considered exceptional, load() is a good option. If you aren't certain there is a persistent instance with the given identifier, use get() and test the return value to see if it's null.

    Using load() has a further implication: The application may retrieve a valid reference (a proxy) to a persistent instance without hitting the database to retrieve its persistent state. So load() might not throw an exception when it doesn't find the persistent object in the cache or database; the exception would be thrown later, when the proxy is accessed. Of course, retrieving an object by identifier isn't as flexible as using arbitrary queries.

    Kent Boogaart : I am debugging an issue right now where session.Get() is returning a proxy!
    Chris : Thanks a lot! The money part for me was: "If load() can’t find the object in the cache or database, an exception is thrown. The get() method returns null if the object can’t be found."
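    A minimal sketch of the two behaviours described above (assuming a mapped User entity):

        // get(): hits the cache/database immediately; null means "not found".
        User u1 = (User) session.get(User.class, userID);
        if (u1 == null) {
            // handle the missing row here
        }

        // load(): may hand back a proxy without touching the database; a
        // missing row surfaces as ObjectNotFoundException only when the
        // proxy is first accessed.
        User u2 = (User) session.load(User.class, userID);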
  • Well, in NHibernate at least, session.Get(id) will load the object from the database, while session.Load(id) only creates a proxy object without leaving your server; the proxy works just like every other lazy-loaded property in your POCOs (or POJOs :). You can then use this proxy as a reference to the object itself to create relationships, etc.

    Think of it like having an object that only keeps the Id and that will load the rest if you ever need it. If you're just passing it around to create relationships (like FKs), the id is all you'll ever need.

  • AOP............

    You should learn AOP.

    Don Branson : You should learn the ways of our people.

Finding the bottleneck in an OpenGL application on the iPhone

I'm trying to track down the bottleneck in an iPhone OpenGL game I'm writing. The game itself is 2D, and contains a couple hundred textured sprites, each alpha blended.

In terms of textures, I only use a single 512x512 atlas that I bind once, so I don't think it's a bandwidth issue (at least not from the upload of textures).

I used Instruments to track the CPU usage, memory usage and OpenGL ES usage. At a heavy point in the game, I was seeing the following:

FPS: 20
CPU: 60%
Real Mem: 17 MB
Tiler utilisation: 21%
Renderer utilisation: 45%

I'm a little confused as to what could be the bottleneck? How high can the CPU usage go (I'm aware that there are other applications running at the same time) before it becomes the bottleneck? Does 60% sound about right?

Or could it be the amount of graphics? I'm guessing not, if the tiler and renderer utilisation are that low. But I must confess I'm no expert at reading these results.

Any pointers on either what my likely bottleneck is, or where else to look for one would be gratefully received!

From stackoverflow
  • The CPU sounds a bit high; are you using a lot of trigonometric functions like sin/cos? Sometimes those are used badly, e.g. in particle systems. Try to keep them to a minimum or, if possible, use a lookup table instead, as in the sketch below.

    Here is a good discussion for approximations
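    A minimal lookup-table sketch in C (the table size and wrapping scheme here are arbitrary choices):

        #include <math.h>

        #define LUT_SIZE 1024   /* power of two, so wrap-around is a cheap mask */

        static float sin_lut[LUT_SIZE];

        /* Fill the table once at startup. */
        void init_sin_lut(void) {
            for (int i = 0; i < LUT_SIZE; i++)
                sin_lut[i] = sinf(i * 2.0f * (float)M_PI / LUT_SIZE);
        }

        /* Approximate sinf() by the nearest table entry. */
        float fast_sin(float radians) {
            int i = (int)(radians * (LUT_SIZE / (2.0f * (float)M_PI)));
            return sin_lut[i & (LUT_SIZE - 1)];  /* masking wraps, negatives included */
        }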

  • Have you checked to see which lines of your code are running the majority of the time? I'm pretty sure there is a tool for this in Instruments.

    strager : Called a 'profiler.'
    samoz : Yes, I just wasn't sure what it is in Instruments.
    epatel : Shark is the tool to use for Mac/iPhone, a runtime sampler... but remember to start and stop it only when the app is running on the iPhone - or so I have heard.
    epatel : http://developer.apple.com/tools/shark_optimize.html
  • While your 'renderer' utilisation is low, I'm assuming this is the CPU side of the OpenGL calls. You can think of this part as building up a list of things for the GPU to do when you glFlush/glFinish. Once you call one of these, the GPU actually has to do the things you set up.

    While I haven't worked on the iPhone it sounds like you are GPU bound. So here is a list of things I would check (having not worked directly on the iPhone) in order of how long they would take to investigate:

    1. Are you calling glFinish? If so try using glFlush - glFinish blocks while the GPU does its thing.

    2. If you disable alpha blending do you see an appreciable speed up? I don't think this one is very likely, but it should be quick to check.

    3. Are you doing any sort of culling on your objects, or are you rendering all of your objects each frame?

    Does the iPhone have any sort of GPU profiler?

  • If you haven't already, read through Optimizing OpenGL ES for iPhone OS, it's got a lot of useful tidbits on how to write speedy OpenGL applications on the iPhone.

  • How have you implemented your 2D game? Is each individual sprite rendered onto its own triangle-pair polygon, or do you render each sprite onto the 512x512 texture surface?

    For the latter method, altering the texture for each frame can incur a large performance penalty and isn't recommended in the documentation, which is a little bit of a pain really. glDrawTex may provide a workaround, but I haven't tested this and I'm not confident it'll work.

Ignoring ctrl-c

I'm trying to write a shell and I'm at the point where I want to ignore ctrl-c.

I currently have my program ignoring SIGINT and printing a new line when the signal comes, but how can I prevent the ^C from being printed?

When pressing ctrl-c, here is what I get:

myshell>^C
myshell>^C
myshell>^C

but I want:

myshell>
myshell>
myshell>

Here is my code relevant to ctrl-c:

extern "C" void disp( int sig )
{
    printf("\n");
}

main()
{
    sigset( SIGINT, disp );
    while(1)
    {
        Command::_currentCommand.prompt();
        yyparse();
    }
}
From stackoverflow
  • Try printing the backspace character, aka \b ?

  • It's the terminal that echoes that thing. You have to tell it to stop doing that. My stty manpage says

    * [-]ctlecho
           echo control characters in hat notation (`^c')
    

    running strace stty ctlecho shows

    ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
    ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig icanon echo ...}) = 0
    ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
    

    So running ioctl with the right parameters could switch that control echo off. Look into man termios for a convenient interface to those settings. They're easy to use:

    #include <termios.h>
    #include <unistd.h>
    #include <stdio.h>
    
    void setup_term(void) {
        struct termios t;
        tcgetattr(0, &t);
        t.c_lflag &= ~ECHOCTL;
        tcsetattr(0, TCSANOW, &t);
    }
    
    int main() {
        setup_term();
        getchar();
    }
    

    Alternatively, you can consider using GNU readline to read a line of input. As far as I know, it has options to stop the terminal doing that sort of stuff.

Markup Extensions in WPF/Silverlight

Has anyone ever created a custom markup extension in WPF or Silverlight? When would you ever want or need to do this? Any tips or sources on how to do it?

From stackoverflow
  • Yes it is handy and I have created one myself. I created a markup extension called EvalBinding that takes a set of bindings as children and a C# evaluation string. It evaluates the C# to process the values from the child bindings so that I do not need to create many simple TypeConverter classes.

    For example I can do this...

    <EvalBinding Eval="(this[0] > this[1] ? 'GT' : 'LTE')">
        <Binding ElementName="element1" Path="Size"/>
        <Binding ElementName="element2" Path="Size"/>
    </EvalBinding>
    

    Where this is a reference to the array of child binding results.

    For resources on implementing a MarkupExtension...

    MSDN

    Josh Smith Blog Entry

    Rob Relyea Blog Entry
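    For a flavour of what's involved, a bare-bones custom extension might look like this (a sketch, not the EvalBinding above):

        public class UpperCaseExtension : System.Windows.Markup.MarkupExtension
        {
            public string Value { get; set; }

            // Called by the XAML parser when the attribute is evaluated.
            public override object ProvideValue(System.IServiceProvider serviceProvider)
            {
                return Value == null ? null : Value.ToUpper();
            }
        }

    which could then be used as, say, <TextBlock Text="{local:UpperCase Value=hello}"/>.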

  • Another example would be for Localization

    Note: You cannot write custom markup extensions in Silverlight.

    Charles Graham : Ah, another of its many limitations. I can't wait for Mix so I can figure out if they fixed this shit.

Tools for tutorial/cookbook style documentation

Do you use any tool for writing tutorial-style or cookbook-style documentation? How do you keep it up to date (e.g. changes in output, changes in signatures, changes in command line parameters, ...)? Right now I do everything manually, and it's a pain.

I know the tools for producing reference docs, like Sandcastle and NDoc. That's not what I'm looking for.

To clarify, my dream tool would allow me to write an actual .cs or .fs file like:

//>tutorialtext The PrintCakeReady function allows you to check whether your cake is ready. For example:

//>startcodesnippet

var cake = new ChocolateCake();

cake.PrintCakeReady();

//>endcodesnippet

//>tutorialtext Which produces:

//>outputprevioussnippet

This .cs or .fs file should be compilable - I can include it in my project so that it remains up to date. On the other hand I should be able to produce documentation from it, i.e. to html, which would include the code fragments in some distinctive style, execute them (like in Snippet Compiler, or Linqpad) and include the results in the appropriate place.

The benefits would be huge: I'd detect many incompatibilities in the documentation, it could be refactored, and if I change the output of some function, the documentation would auto-update.

But maybe there's a far simpler approach that gets me 90% there. How do you do it?

EDIT: I've found this paper, which elaborates on this idea. However, the associated tools are old (2002) and have not been updated.

From stackoverflow
  • In F#, the following approach occurs to me (I have not thought through all the implications):

    TutorialText @"Yadda yadda yadda
    Blah blah blah"
    
    let snippet = <@ 
        let cake = new Cake()
        cake.PrintWhenReady()
    @>
    
    DisplayCode snippet
    
    TutorialText @"Blather"
    
    DisplayOutputOfExecuting snippet
    

    Where the idea is that code snippets are quotations of actual code, which you can print or execute, and the execution of the whole program has the effect of printing out the documentation (rather than an external tool walking the .fs file and pulling out comments to make the doc). Basically I am suggesting authoring an internal DSL in F# for authoring documentation, and leveraging the quotation mechanism to deal with the dual nature of snippets (as both text to print and code to execute). I have no idea how well this will actually work.

    Kurt Schelfthout : I've been playing around with this (since apparently there is no tool for .NET that comes even close to what I want). Biggest problem is getting the DisplayCode function to work - a naive ToString on the quotation displays an AST. So I'd need sort of reverse compilation, which is overkill here.
  • Things I know of which are vaguely like what you seem to be looking for (but not for C#):

    Kurt Schelfthout : I can see how Python would have the advantage here. Good tips.
  • For what it's worth, I did that for Java:
    http://www.agical.com/bumblebee/bumblebee_doc.html

    I have no plans to port it to F# (need to learn the language first ;-)), but if you would write a similar tool maybe you could get some inspiration (or the opposite).

    Cheers!

    Kurt Schelfthout : Nice work! I'll mark this as the "closest answer". I'll definitely have a good look at it. If I do write something myself, it's likely to be much much simpler though...

Sun App Server : How to monitor connections in the connection pool? on scheduled frequency?

Anyone know how to do that?

From stackoverflow
  • Use the monitor command.

    You would use:

    --type connectorpool

    and

    --interval - the interval in seconds before capturing monitoring attributes. The interval must be greater than 0. The monitoring attributes are displayed on stdout until you type ctrl-c or q. The default value is 30.
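    For example (assuming the default server instance name and a connector pool that is already configured):

        asadmin monitor --type connectorpool --interval 10 server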

Strange Ogre Error and a Non-Existent File

I am getting this error, and I have no clue where it comes from:

OGRE EXCEPTION(2:InvalidParametersException): Header chunck didn't match either endian: Corrupted stream? in Serializer::determineEdianness at f:\codingextra\ogre\shoggoth_vc9\ogre\ogremain\src\ogreserializer.cpp (line 90)

I am using Visual Studio 2008. I tried to gvim the file on the f: drive mentioned, but apparently it doesn't exist? I also tried to cd to the directory, and it says it doesn't exist. Any insight?

From stackoverflow
  • You're using a pre-compiled version of Ogre. If you want to debug it, you might want to download the Ogre sources and install them. It's clear, though, that the Serializer class is reading some data that you've given it that it expects to be in a certain format. Specifically, it's looking for a flag in the header that marks whether the data is little- or big-endian. (Least- or most-significant byte first.)

    You could also try catching the exception wherever your code calls Ogre, which will help you narrow down the problem code.

    jimi hendrix : OK, where would I put the source after it is compiled? Also, what would be the most effective way to try and catch this error?
    jimi hendrix : s/source/libraries and stuff/
    greyfade : Just build the sources and set your program to link to the libraries *in that directory*. I've given the most effective means to catch it: put a try/catch block wherever you're loading meshes and catch the exception you're getting. It carries most of the information you need.
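    A sketch of that try/catch, placed wherever your code loads the mesh (the file name here is hypothetical):

        try {
            Ogre::MeshManager::getSingleton().load("mymodel.mesh",
                Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
        } catch (Ogre::Exception& e) {
            // getFullDescription() includes the source file and line, as above.
            std::cerr << "Ogre error: " << e.getFullDescription() << std::endl;
        }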

Hibernate: Is there a way to programatically create new tables that resemble an existing one?

I have a web app that has many tables (each represents a POJO). I wrote mapping files for each class and then used Hibernate's SchemaExport to generate the tables in my database. Now I want to create 2 additional tables for each existing table that was created:

  • User permission table - stores user permissions on the POJO, specific to each field.
    Each column represents a field in the POJO, each row represents a user, and each cell has a value of "read", "write" etc., representing the user's permission on that field.

  • Data history table - stores all data history with a version number.
    This table has all the columns that the POJO table has, plus 4 additional fields: the data object version, a transaction GUID (primary key), a datestamp, and the user who performed the transaction.

I would like to be able to access these tables through Hibernate after they are created, so that I can easily add/remove/update entries in them.


My Question

Most of the columns in the additional tables will be the same as in the POJO table. So I think it is probably better to somehow reference the POJO table instead of creating brand new tables. This way, if a new field is added to the POJO table, these tables will pick up the change automatically. But I don't seem to know how to do this. I thought maybe there's some way of:

  • create hibernate mapping files that references the POJO table somehow
    e.g. in my new POJOPermission.hbm.xml file, somehow specify: use the same fields as the POJO table, and add these new fields.
  • write Java code to create tables in Hibernate
    e.g. I can use java.lang.Class to get a list of all the fields in the POJO, then iterate through these, somehow set them as column headers of my new table, and somehow call Hibernate to create these tables at runtime.

Could someone please tell me how to do either of the above, or suggest a workaround for this issue? I have a feeling that I might be thinking about this the wrong way....

Thank you!!!

[Edit - solution]

I ended up using XSLT to translate the original hbm.xml files into new ones, changing the field types etc., and specified in the hibernate.cfg.xml file that the newly generated ones should be included. In the end I just ran SchemaExport to generate everything together...

From stackoverflow
  • CREATE TABLE syntax is implementation specific. DB2 has CREATE TABLE LIKE, but you need to parse the catalog tables to retrieve the keys, foreign and domestic, then generate ALTER TABLE... statements to implement the same. Other databases may have similar features.

  • Using "select into" you can create a copy of the basic table structure. It only copies columns and data types, not constraints or indexes. If you only want the structure without data, just specify a where clause that returns no records.

    select * into targettable from sourcetable where 1=0
    
  • If you want to access the user permissions and data history through Hibernate then I think you need to approach the problem by thinking about how you would represent these as POJOs. You can then produce mappings for storing them in the database.