Friday, May 6, 2011

Displaying a bitmap on a "BUTTON" class window in WIN32

Edit: I think the WM_CREATE message isn't sent during the creation of child windows (namely my button). So by calling SendMessage during WM_CREATE, I'm sending a message to a window that hasn't been created yet. The solution for now is to call SendMessage() during the WM_SHOWWINDOW message. Do child windows send WM_CREATE messages at creation?

Why isn't the bitmap displaying on the button? The bitmap is 180x180 pixels.

I have a resource file with:

Bit BITMAP bit.bmp

I then create the main window and a child "BUTTON" window with:

HWND b, d;

b = CreateWindow(L"a", NULL, WS_OVERLAPPEDWINDOW, 0, 0, 500, 500, 0, 0, 
                  hInstance, 0);

d = CreateWindow(L"BUTTON", NULL, WS_CHILD | WS_VISIBLE | BS_BITMAP, 
                 10, 10, 180, 180, b, 200, hInstance, 0);

Then, in my window procedure, I send the "BUTTON" window the "BM_SETIMAGE" message with:

HBITMAP hbit; 

case WM_CREATE:    // It works if I change this to: case WM_SHOWWINDOW 

hbit = LoadBitmap(hInstance, L"Bit");

SendMessage(d, BM_SETIMAGE, (WPARAM)IMAGE_BITMAP, (LPARAM)hbit);

LoadBitmap() is returning a valid handle because it isn't returning NULL, and I'm able to display the bitmap in the client area using the BitBlt() function. So I'm either not sending the message correctly, or I'm not creating the "BUTTON" window correctly.

What am I doing wrong?

Thanks!

From stackoverflow
  • How are you verifying that WM_CREATE isn't getting called? Since BUTTON isn't your window class (but rather defined by the OS) it owns the WndProc for the window, not you - therefore WM_CREATE shouldn't be called for the button in your code, because BUTTON isn't your class.

    If you want to receive messages for the button, you'll have to subclass it, and then provide your own WndProc.

    tyler : What I tried to explain in my edit is that WM_CREATE is only sent to the main window, not to the button. I wasn't saying that WM_CREATE isn't being sent. I thought that maybe my WndProc would receive WM_CREATE messages during the creation of its child windows. It does after all receive WM_COMMAND messages that were generated from its child button window.
  • The window procedure for your window class "a" gets called with WM_CREATE when a window of that class is created. This is during your first call to CreateWindow, which is before you create the child BUTTON window. WM_CREATE means "you are being created" - it doesn't mean "a child is being created".

    The solution is to call d = CreateWindow(L"BUTTON"...) in the WM_CREATE handler for class "a":

    case WM_CREATE:
        d = CreateWindow(L"BUTTON", NULL, WS_CHILD | WS_VISIBLE | BS_BITMAP, 
                         10, 10, 180, 180, hwnd, 200, hInstance, 0);
        hbit = LoadBitmap(hInstance, L"Bit");
        SendMessage(d, BM_SETIMAGE, (WPARAM)IMAGE_BITMAP, (LPARAM)hbit);
    
    tyler : Thanks. I thought the same thing but when I tried it I actually get no button at all (not even an outline). The only thing that has worked has been to put the SendMessage() in WM_SHOWWINDOW.
    RichieHindle : I bet that's because you're calling d = CreateWindow(..., b, ...) rather than d = CreateWindow(..., hwnd, ...) - remember, you're within the first call to CreateWindow, so b hasn't yet been assigned to.
    tyler : Wow, brilliant..that completely makes sense. Thanks man.
  • take a look here:

    http://winapi.foosyerdoos.org.uk/info/user_cntrls.php

UIImageView and UIImage: How can I tweak the most performance out of them?

First: I've implemented a fairly complex scrolling mechanism for images that allows scrolling over a few hundred thousand images (theoretically) in a single scroll view. This is done by preloading small portions upon scrolling, while re-using all UIImageViews. Currently all I do is assign newly created UIImage objects to those re-used UIImageViews.

It might be better if it's possible to also re-use those UIImage objects by passing new image data to them.

Now the problem is, that I am currently using the -imageNamed: method. The documentation says, that it caches the image.

Problems I see in this case with -imageNamed: As the image gets scrolled out of the preloading range, it's not needed anymore. It would be bad if it tries to cache thousands of images while the user scrolls and scrolls and scrolls. And if I would find a way to stuff new image data into the UIImage object for re-using it, then what happens with the old image that was cached?

So there is one method left, that seems interesting: -initWithContentsOfFile:

This does not cache the image. And it doesn't use -autorelease, which is good in this case.

Do you think that in this case it would be better to use -initWithContentsOfFile:?

From stackoverflow
  • Only a benchmark can tell you for sure. I'm inclined to think that UIImage image caching is probably extremely efficient, given that it's used virtually everywhere in the OS. That said, with the number of images you're displaying, your approach might help.

Which is not a reason to create a custom exception?

Hello,

Recently I took a test at Brainbench and got a decent result (something like 4.5, master level). There was only one question whose answer I didn't know (the rest I was sure about, or at least I thought I knew the correct answer :) ). The question is:

Which one of the following is NOT a reason to create custom exceptions?

Choice 1
To insert a strong label for later inspection
Choice 2
To strongly-type the purpose of a particular exception
Choice 3
To allow for remote serialization
Choice 4
To process common steps when an exception is created
Choice 5
To add custom properties for custom data propagation

I answered "4" - To process common steps when an exception is created. Which one do you think is correct?

From stackoverflow
  • Choice 3. The base exception either already supports remoting, or else deriving from it won't add remoting.


    The "exception" Marc mentions in a comment is as follows; I think it's not what the test writers had in mind:

    In a WCF service, you can allow an unhandled exception to propagate out of the service. WCF will turn it into a SOAP Fault, which may or may not contain details of the unhandled exception, depending on configuration.

    Better would be to recognize certain sets of exceptions and translate them deliberately into SOAP Faults. For instance, a service that operates on database entities could expect that sometimes an entity would not be found; sometimes an attempt to add a new entity would result in a duplicate; sometimes an update attempt would have resulted in an invalid state. Such a service might decide to expose a NotFoundFault, DuplicateItemFault, and InvalidStateFault.

    The service would define the three faults as Data Contracts to define their contents:

    [DataContract]
    public class FaultBase {
        [DataMember]
        public string ErrorMessage {get;set;}
    }
    
    [DataContract]
    public class NotFoundFault : FaultBase {
        [DataMember]
        public int EntityId {get;set;}
    }
    
    [DataContract]
    public class DuplicateItemFault : FaultBase {
        [DataMember]
        public int EntityId {get;set;}
    }
    
    [DataContract]
    public class InvalidStateFault : FaultBase {
    }
    

    You would then indicate that particular operations can return such faults:

    [OperationContract]
    [FaultContract(typeof(NotFoundFault))]
    public Entity GetEntityById(int id)
    

    Finally, you might wrap an exception from the DAL in such a way that WCF will return the particular fault instead:

        try {
            return DAL.GetEntity<Entity>(id);
        }
        catch (DAL.NoSuchEntityException ex)
        {
            throw new FaultException<NotFoundFault>(
                new NotFoundFault {EntityId = ex.Id, ErrorMessage = "Can't find entity"});
        }
    

    I think the test developer was trying to get you to think that something special needs to be done in order for an exception to be serialized for remoting to a different AppDomain. This will not be the case if the custom exception was properly implemented, as the supplied .NET exception classes are all serializable. Thus, the ability to serialize is not an excuse to create a custom exception, as the base class should already be serializable.

    nightcoder : Yes, maybe, I thought about it already before writing the question..
    victor hugo : I vote 3 too! All the other ones are reasons to create a custom exception
    Marc Gravell : Re 3 - that may be true for .NET "remoting", but for WCF 3 *is* a reason to replace an exception with a "fault" (a special class of exception) for serialization purposes.
    John Saunders : @Marc: I know that, and you know that, but did the creators of the test know that? I bet not, and the context was more general.
  • I'll actually agree with 4 being the wrong one: To process common steps when an exception is created.

    Exceptions are not supposed to execute "steps" when they're created, but rather to inform the system that an exceptional situation has arisen which the current class (and possibly module) doesn't know how to address.

    If executing common steps is necessary for the proper execution of a function, that functionality should be included in the code itself, not separated into an exception.

    Marc Gravell : I'll buy that...
    Dan C. : If you think of cleanup actions as "common steps when an exception is created" and if you do not consider "created" literally, it might make sense. Brainbench tests sometimes have misleading wording...
  • While using WCF services, we did have to create custom exceptions for serialization. So 3 can't be the correct answer.

    nightcoder : PS. The test was "C# 2.0 Fundamentals" so I think it can't consider WCF aspects.
    Rashmi Pandit : Ok ... the same would apply for web services though

How to add debug assemblies to my Silverlight 2 application?

So I know now that the debug assemblies have been intentionally left out of the Silverlight runtime to save space. For that reason I get good detailed error messages on my local machine that has the Silverlight SDK on it, but I don't on a computer with the runtime only. I get the ubiquitous, "Debugging resource strings are unavailable."

Unfortunately my requirements are a bit unique. I need to include the debug assembly (not sure which one yet) that will give me details of a regular expression error. And so essentially I want to include the dll in the xap if I can.

The problem is that I can't seem to do this. I've tried adding the debug DLLs as references and setting them to "copy local." And I've tried adding them into the project as content. But in fact, with either method the xap hardly grows in size and the error message doesn't change.

Any ideas?

From stackoverflow
  • You'll still need the actual Silverlight Developer Runtime to be installed (thus you get the errors etc on the machine you had the SDK installed on). Adding the debug assembly into a production solution and accessing it via the non-developer runtime isn't possible.

    Scott Barnes / Rich Platforms Product Manager / Microsoft.

    Steve Wortham : Thanks Scott. That's what I was beginning to think. I suppose the one other option is to perform a try/catch around the regex, and then if there's an error then send the regex to a web service that then replies with the error details. It's not exactly my idea of the perfect solution, but I may do something like that. If I do all of that asynchronously it should still make for a decently responsive UI experience. By the way, this is all for www.regexhero.com Thanks again, Steve
    Scott : Is this still a limitation with Silverlight 3.0 and Silverlight 4.0? I ask, because it becomes a problem when doing validation as these strings are sometimes used in the ValidationSummary control.
  • So my solution to the problem was essentially to give up on what I was trying to do. Instead, I'm now calling a web service whenever an exception occurs around the regex. That web service has a function I made called GetRegexError.

    Here's the code for it:

    <WebMethod()> _
    Public Function GetRegexError(ByVal strRegex As String, ByVal _regexOptions As RegexOptions) As String
        Try
            Dim _regex As New Regex(strRegex, _regexOptions)
        Catch ex As Exception
            Return ex.Message
        End Try

        Return ""
    End Function
    

    This is now implemented in Regex Hero. Thank you Scott for the help.

  • Hi Scott, I'm facing this problem now, and if I don't have a server-side solution, every client will need to install the Silverlight Developer Runtime manually, so I don't think that's a good solution.

    @Steve: If you can, please post your example solution to this problem here.

    Thanks.

    Steve Wortham : My answer is right above (or below) yours.

CALayer and CGGradientRef anti-aliasing?

Hello all. I'm having an odd issue with CALayer drawing for the iPhone. I have a root layer which adds a bunch of sublayers representing "bubbles". The end result is supposed to look something like this:

http://www.expensivedna.com/IMG_0018.PNG

The problem is that I can't seem to get the layer to anti-alias (notice the jaggies on the bubbles). My code overriding drawInContext: for the bubble CALayer is as follows:

- (void)drawInContext:(CGContextRef)theContext{
CGContextSetAllowsAntialiasing(theContext, true);
CGContextSetShouldAntialias(theContext, true);

size_t num_locations = 2;
CGFloat locations[2] = { 0.0, 1.0 };
CGFloat components[8] = { 1.0, 1.0, 1.0, 0.5,  // Start color
        1.0, 1.0, 1.0, 0.0 }; // End color
CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef glossGradient = 
CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);

CGPoint topCenter = CGPointMake(25, 20);
CGPoint midCenter = CGPointMake(25, 25);
CGContextDrawRadialGradient(theContext, glossGradient, midCenter, 20, topCenter, 10, kCGGradientDrawsAfterEndLocation);

}

Now the really odd thing is that if I slightly alter the drawing code to only draw normal red circles as follows:

- (void)drawInContext:(CGContextRef)theContext{
CGContextSetAllowsAntialiasing(theContext, true);
CGContextSetShouldAntialias(theContext, true);

CGContextSetFillColorWithColor(theContext, [UIColor redColor].CGColor);
CGContextFillEllipseInRect(theContext, CGRectMake(0, 0, 40,40));
}

Everything seems to antialias OK:

http://www.expensivedna.com/IMG_0017.PNG

I can't seem to figure out this seemingly odd behavior. Am I missing some difference between antialiasing gradients and normal circles?

Thanks guys.

From stackoverflow
  • maybe try dropping out the kCGGradientDrawsAfterEndLocation? It might be doing something weird with the alpha

  • Jeff - make topCenter a bit further out and use CGContextClipToMask with a circular mask.

    Edit: Actually a much much better way to do it is to use a vector clipping path using CGContextClip.

    Edit 2: Sample code:

    CGContextAddEllipseInRect(theContext, CGRectMake(20, 20, 10, 10));
    CGContextClip(theContext);
    

    Add this before you draw your gradient, and draw it a bit further out.

  • Thanks guys. So it turns out that the answer is a combination of your two answers (coob and Alex). Basically it seems like the CGContextDrawRadialGradient function only anti-aliases the starting circle, not the ending one. Since I want anti-aliased "edges" on both, I first set the function to draw from the inside out, which takes care of the first "edge", but produces the following:

    Step 1

    Then, I clip the image as suggested by coob, and that gives nice anti-aliasing around the final edge of the bubble:

    Step 2

    Looks good enough for me!

    - (void)drawInContext:(CGContextRef)theContext{
    CGContextSetAllowsAntialiasing(theContext, true);
    CGContextSetShouldAntialias(theContext, true);
    
    size_t num_locations = 2;
    CGFloat locations[2] = { 0.0, 1.0 };
    CGFloat components[8] = { 1.0, 1.0, 1.0, 0.0,  // Start color
         1.0, 1.0, 1.0, 0.5 }; // End color
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef glossGradient = 
    CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
    
    CGContextAddEllipseInRect(theContext, CGRectMake(5, 5, 40, 40));
    CGContextClip(theContext);
    
    CGPoint topCenter = CGPointMake(25, 20);
    CGPoint midCenter = CGPointMake(25, 25);
    CGContextDrawRadialGradient(theContext, glossGradient, topCenter, 10, midCenter, 20, kCGGradientDrawsAfterEndLocation);
    

    }

How to protect the WS-Discovery ad hoc network from man-in-the-middle attacks

The WS-Discovery specification explains how to protect your network from

  1. Message alteration
  2. Denial of service
  3. Replay
  4. Spoofing

But what about a man-in-the-middle attack?

From stackoverflow
  • As far as I understand, the "message alteration" mitigation, that is, signing the messages, protects the interaction from a man-in-the-middle attack. If you can verify the source of the message and its authenticity via the sender's unique signature, then anyone trying to pretend to be the legitimate sender won't be able to do so.

    Disclaimer: I am not security expert.

  • The idea behind a Man in the Middle Attack(Wikipedia.org), is that your network is compromised and the attacker can intercept, view, and modify traffic between all members. The most basic step towards preventing this is to encrypt the network with WPA (at the minimum) and keep the access points locked down. Your goal should be to first prevent an attacker from getting into the network. The second layer of defense you could employ is to use some form of encryption for all the traffic between parties on the network (perhaps something other than public/private) so even if the network is compromised, the traffic will still not be intelligible to the attacker.

    Disclaimer: I am also not a security expert.

  • WS-Security covers this when you sign the message: the signature is created with the sender's private key and verified by the recipient with the public key, so a man in the middle won't be able to alter the message without being detected.
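
A rough sketch of that idea in Python, using the third-party cryptography package (this is generic public-key signing, not WS-Security itself, and the payload below is made up): the sender signs with its private key, the recipient verifies with the matching public key, and any alteration by a man in the middle makes verification fail.

    # Generic signature illustration (not WS-Security): sign with the private
    # key, verify with the public key; tampering invalidates the signature.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"<ProbeMatch>...</ProbeMatch>"  # made-up payload
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())

    tampered = message + b"<!-- injected by an attacker -->"
    try:
        public_key.verify(signature, tampered, pss, hashes.SHA256())
        print("signature ok")
    except InvalidSignature:
        print("message was altered in transit")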

FogBugz-like dropdown menu

I am searching for a jQuery plugin to get a FogBugz-like dropdown menu, so that you can type into the dropdown menu.

This is what it should look like: http://dl.getdropbox.com/u/5910/Jing/2009-05-10_0937.swf ( the old video http://dl.getdropbox.com/u/5910/Jing/2009-05-10_0055.swf )

None of the answers so far have given me the typing part. Any other suggestions?

EDIT: I found something a bit like it: http://phone.witamean.net/sexy-combo/examples/index.html but there are a couple of things that bug me: it doesn't select the whole string when clicking in the textbox, and the dropdown doesn't show all options when clicking on the triangle.

I found a working demo but it's from Ext :( http://extjs.com/deploy/dev/examples/form/combos.html not jQuery... but it has all the features I need.

From stackoverflow
  • I haven't found one yet that has both auto-complete and the combo drop down arrow. However, this plug-in is close.

  • That's usually called a combobox, you can find a nice jQuery one here:

    http://jonathan.tang.name/code/jquery_combobox

    Thomaschaaf : but it doesn't have the writing part
    James Avery : changed the link to a better one (based on jQuery UI too)
    Thomaschaaf : Thanks a bunch I wanted to start writing my own tomorrow now I don't have to :)

IIS5 not serving index.html on local machine even though it's listed in the Default Documents

I'm developing a site in VS2008 on a machine running XP SP3 with IIS5. I've named the main page in each directory index.html to avoid the www.domain.tld/directory/pagename.ext scenario of specifying a full path, and also because these pages literally contain an index of the other pages in their directory.

When I debug on my local machine I get the dreaded "Directory Listing" page instead. I have confirmed that index.html is listed in the IIS Default Documents, and I've also tried moving it into the first position. No change. Uploading the site to a server running IIS7 produces the expected and desired results.

Is the problem because I have an older version of IIS? Is there a difference in how IIS operates when running locally instead of on a web server? Do I need to change a setting in web.config? Any thoughts will be appreciated.

From stackoverflow
  • Did you make sure Enable Default Document is checked? IIS 5 ignores any webserver settings in your web.config so that shouldn't have anything to do with it.

    Bryan : Yeah, I made sure it was checked. The Default Document settings are disabled when it's not checked, so if it hadn't been, I wouldn't have been able to move index.html to the first position. Thanks for the suggestion anyway.

Open Office Base Datetime Default

Can anyone tell me how to set the default value on a date field in Open Office Base, in the same way that GetDate() works for SQL Server?

Cheers OneSHOT

From stackoverflow
  • Give CURDATE() a try.

    Found at http://wiki.services.openoffice.org/wiki/Built-in_functions_and_Stored_Procedures

    OneSHOT : +1 for being the only answer :-) but I no longer have access to the setup to test this. The project got canned and so did the dev machine!
    OneSHOT : I've created a test db just so I could close this off and it seems CURDATE() works! Cheers Richard.
    Richard West : Glad I could help

Where to put a method that works on a model

I'm working with Django.

I have a model called Argument. Arguments have sides and owners. I have a function that returns the side of the most recent argument of a certain user.

like obj.get_current_side(username)

I've added this to the actual Argument model like this

def get_current_side(self, user):
    return self.argument_set.latest('pub_date').side

I am starting to think this doesn't make sense because there may not be an instance of an Argument. Is this a place I would use a class method? I thought about making a util class, but I'm thinking that it makes sense to be associated with the Argument class.

From stackoverflow
  • I think what you are looking for are model managers.

    Django docs on managers. With managers you can add a function to the model class instead of a model instance.

  • It would make more sense to have instance methods on the User model:

    def get_current_side(self):
        try:
            return self.arguments.latest('pub_date').side
        except User.DoesNotExist, e:
            return None
    

    You can do this by extending the User model as explained here:

    Edit: I'm not exactly sure which exception gets thrown.

  • This should be a method on a custom model manager:

    # in models.py
    class ArgumentManager(models.manager.Manager):
        def get_current_side(self, user):
            try:
                return self.filter(user=user).latest('pub_date').side
            except Argument.DoesNotExist:
                return None
    
    class Argument(models.Model):
        # fields etc...
    
        objects = ArgumentManager()
    
    
    # Calling:
    
    side = Argument.objects.get_current_side(user)
    

    Alternatively you can extend contrib.auth.User and add get_current_side() to it. But I wouldn't mess with that until I'm very confident with Django.

    BTW: Most of the code on this page is wrong; for example, the user variable is not used at all in the OP's snippet.

Is there a way to declare timeout period with NUnit?

I would like to be able to fail the test if the executing code hangs. Is there a way to do this currently?

I am thinking something like the following must exist, but I can't seem to find it in the API

[Test, Timeout(TimeSpan.FromSeconds(2))]
public void Test() { ...}
From stackoverflow
  • AFAIK there's nothing built into NUnit that'll do this but it should be easy enough to do with DateTime (or performance counters if you're wanting higher resolution timers)...

    George Mauer : AFAIK == As Far As I Know? Man we're getting silly with these internet acronyms. Thanks though!
  • Are you using NUnit 2.5? A TimeoutAttribute was added in NUnit 2.5 that does exactly what you want, though you specify the timeout in milliseconds. See the release notes.

    George Mauer : Didn't even know that a new one is out, upgrading is fun

Why would people use pure XML databases over plain RDBMs?

How many of you are actually using pure XML databases over RDBMs? The former seem to be gaining momentum, but I don't understand the advantage. Anyone care to explain?

From stackoverflow
  • What do you mean by "pure XML database"? If you mean a text file containing XML, that is not really a database, as it lacks almost all the features of a true database, such as transactions, locking, multi-user access, security etc. etc.

    Dervin Thunk : http://en.wikipedia.org/wiki/XML_database, particularly the second flavor. There's even books about it (see Amazon) and lots of implementations...
    Shog9 : database != RDBMS... me, I prefer CSV file databases.
    anon : I wasn't talking about relational databases - OODBs have all the features I mentioned.
  • Take a look at this article; I guess it might help you.

  • Conditions that suggest XML isn't a crazy idea

    • If your data looks like a collection of documents. For example, novels have structure, e.g. chapters, paragraphs, sentences, words. You might want to access the structure programmatically, but it would be hard to make a relational schema that would support that.

    • A mind-boggling number of fields and tables would be required, and almost all are optional. For example, not all novels have a villain, but a villain attribute or tag would be easy enough to add to an XML document.

    • If you have a fairly small amount of data.

    • Data is strongly hierarchical. It is easier to query an XML document of an organizational chart than to do a similar query on an employee table with a manager column that links to itself (see the sketch after these answers).

    Example: DasBlog, which uses plain old XML as the datastore.

    Conditions that suggest a relational model is better

    • Most of your data fits nicely into tables and columns, fairly small numbers of fields, most fields are required.

    • There is a lot of data. The Relational world has been optimizing for performance much longer than the XML database world.

    You can have it both ways

    • Most modern relational databases support xml as a first class data type.
    KLE : +1 for good answer
  • I fail to see the real benefits of using XML over relational databases without the context.

    Relational databases are often clumsy to operate, even though you can easily run whatever queries you need against them. If you don't need to run queries on your data then you don't have any real use for an RDBMS. How many photos have you seen wrapped into a relational database? Isn't it more convenient to store photos as JPEG or PNG? On the other hand, do you like to store image pixels in XML?

    StaxMan : Keep in mind that (native) XML databases can do queries as well, usually using XQuery.
    Leonel : One argument for storing images as byte arrays inside an RDBMS is that the backup is easier to handle. I used to work on a web app where we stored images from users in a folder on the file system. It was painful to have two different backup routines, one for the DB and one for files.
  • As someone who works on an open source product that heavily uses XML databases, I find XML data sources invaluable as they are representations of the pure data structures used in the program.

    XML allows me to model a complex structure in code, then serialize it directly to XML to be read in either elsewhere or at a later date, or, specifically in my case, export a complex structure as XML and then manipulate and query it in memory. There are other options such as an ODBMS or ORM which offer many of the same advantages (and then some more), but they come with a knowledge or performance overhead.

  • One advantage that hasn't been mentioned is that it's much easier if the data is supposed to be available to the public (via RSS or whatever). If the main use for the data is some kind of public API, or if it is going to be formatted as XML later anyway, then why not? Say you wanted to store some HTML templates. Wouldn't it be easier to store them as HTML? You'd also save the overhead of an RDBMS and of processing the data into XML.

    XML is also okay if you do almost all reading and very little writing, although a simpler format such as JSON might be more efficient depending on what you're doing.

    In any other case, especially if there is a lot of data manipulation involved, real databases (whether relational, object-oriented, document-oriented, whatever) are going to be much more efficient because they're built for that. XML wasn't meant for 100,000,000 rows of data.

    I think the reason some databases use XML is because it's such a widely-used format, especially for things like RSS feeds. Like I said before, if your data needs to be XML in the end, then why not store it as XML and make your life easier?
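
To illustrate the "strongly hierarchical" point made above (the org-chart markup and names here are invented for the example): finding everyone who reports, directly or indirectly, to a given person is a one-line descendant lookup on an XML document, whereas the relational version needs a recursive self-join.

    # Hypothetical org chart: each <employee> element nests its direct reports.
    import xml.etree.ElementTree as ET

    chart = ET.fromstring("""
    <employee name="Alice">
      <employee name="Bob">
        <employee name="Carol"/>
        <employee name="Dave"/>
      </employee>
      <employee name="Eve"/>
    </employee>
    """)

    # Everyone under Bob, at any depth: a single descendant traversal.
    bob = chart.find(".//employee[@name='Bob']")
    reports = [e.get("name") for e in bob.iter("employee") if e is not bob]
    print(reports)  # ['Carol', 'Dave']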

XSLT: Foreach iterates for each item, but displays the value of the first item?

Hi,

I have an item list, and for each item I want to turn it into a URL.

List:

 <post>
   <tags>
     <tag>open source</tag>
     <tag>open</tag>
     <tag>advertisement</tag>
     <tag>ad</tag>
   </tags>
 </post>

XSLT:

<xsl:template match="*">
    <div class="tags">
      <xsl:for-each select="/post/tags/tag">
          <a href="#">
            <xsl:value-of select="//tag"/>
          </a>
      </xsl:for-each>
    </div>
</xsl:template>

Output:

  <div class="tags"> 
    <a href="#">open source</a> 
    <a href="#">open source</a> 
    <a href="#">open source</a> 
    <a href="#">open source</a> 
  </div>

What am I doing wrong?

From stackoverflow
  • What you are doing with the value-of expression is selecting all of the tag nodes in the xml document:

    <xsl:value-of select="//tag"/>
    

    The effect of that is that only the first selected node will be used for the value.

    You can use the following instead:

    <xsl:value-of select="."/>
    

    Where select="." will select the current node from the for-each.

    pre63 : Thank you! I tried at first and it did nothing...
    Oded : Thanks for letting me know - removed the incorrect usage.
  • A more XSLT way of doing the correct thing is to add a "tag" template and modify your original:

    <xsl:template match="*">
        <div class="tags">
            <xsl:apply-templates select="tags/tag" />
        </div>
    </xsl:template>
    
    <xsl:template match="tag">
          <a href="#">
            <xsl:value-of select="."/>
          </a>
    </xsl:template>
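
A quick way to sanity-check the corrected templates outside the original environment is to run them with Python's third-party lxml package; the <post> input document and the stylesheet wrapper below are reconstructed from the question, so treat this as a sketch rather than the exact setup.

    # Applies the corrected stylesheet to the sample document and prints
    # one <a href="#"> element per tag. Requires the lxml package.
    from lxml import etree

    doc = etree.XML(
        b"<post><tags>"
        b"<tag>open source</tag><tag>open</tag>"
        b"<tag>advertisement</tag><tag>ad</tag>"
        b"</tags></post>"
    )

    stylesheet = etree.XML(b"""
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="*">
        <div class="tags">
          <xsl:apply-templates select="tags/tag"/>
        </div>
      </xsl:template>
      <xsl:template match="tag">
        <a href="#"><xsl:value-of select="."/></a>
      </xsl:template>
    </xsl:stylesheet>
    """)

    transform = etree.XSLT(stylesheet)
    print(str(transform(doc)))  # each tag now gets its own link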
    

Providing an Edit Button in Oracle Portal 10.1.4 in a Custom Search Portlet

I am using Oracle Portal 10.1.4 and I have a custom search portlet that is effectively doing a canned search that lists content that matches a certain category. I want administrators to be able to click an edit button beside a piece of content that shows up in the results list of the canned custom search portlet. (So each piece of content that comes back from the canned search has an edit link beside it that admins can click on).

Is there a way to do this WITHOUT having to write my own PL/SQL portlet (to replace the custom search portlet)?

From stackoverflow
  • As far as I know, manipulating the output of the standard custom search portlet is not possible, and writing your own search portlet in PL/SQL or Java would be the way to go. The only other method I can think of would be to intercept the output of the portlet and augment it somehow, or use JavaScript to manipulate the DOM to add the required edit links.

    Richard.

jQuery asp dropdownlist

Hi, I have one control with some ASP.NET form elements, but how do I get at those form elements from my other control? I usually do jQuery('#<%= MyDropDownList.ClientID %>').val(), but this is not possible since it is in the other control. How do I access the element without hardcoding it?

So to sum it up: Control 1 has an asp:DropDownList; Control 2 needs access to the value of the asp:DropDownList from Control 1 through jQuery.

From stackoverflow
  • You can get the value of the dropdown inside Control1 using the following jQuery selector.

    $('#<%= Control1.ClientID%> select').val()
    
    Dofs : I know the value inside Control1 can be retrieved by the code you wrote, but what if I want it from Control 2?
    Jose Basilio : I am assuming that both controls are in the same page and the jQuery is part of the page. If the jQuery is part of each individual control, then you may need to assign a unique CssClass property to the dropdown in Control1 so that you can uniquely identify it from Control2.
    Jose Basilio : If this does not help, please post some of your code.
  • If the names are unique, you can use the selector that does name matching on the end of the id.

    $('[id$="DropDownList1"]').val();
    

    This will match all controls whose id ends with DropDownList1 and get the value of the first one. If the name is unique, then it will be your other dropdown list.

Is it possible to print all reductions in Haskell - using WinHugs?

I have written the following function and executed it using WinHugs:

teneven = [x | x <- [1..10], even x]

My output :

Main> teneven
[2,4,6,8,10] :: [Integer]
(63 reductions, 102 cells)

Is there any way to print all the reductions, so I can learn about the core evaluation happening inside WinHugs?

From stackoverflow
  • Believe me, you don't want to go this way.

    The set (and order) of reductions used in each particular case depends on the particular language implementation (Hugs could do it one way, GHCi another, JHC yet another, etc.).

    Better to read something about general ways to implement a compiler/interpreter/virtual machine for a functional language - like the SECD machine, etc.

    Several links:

  • Some ideas:

    1. The debug command-line option (which you can set with :set +d in Hugs) is informative, but is very verbose and does not show you the reductions in Haskell syntax.

    2. Try Hat - the Haskell Tracer. I just tried it on a simple program and it's pretty cool. I'm not on Windows, though, and I don't know how difficult it would be to get it running. It's likely fairly difficult, which is a shame since it's cool and essentially what you want. If you do get it running, you can get something like this information from Hat:

      main = {IO}
      teneven = [2,4,6,8,10]
      _foldr (\..) [1,2,3,4,5,6,7,8, ...] [] = [2,4,6,8,10]
      (\..) 1 [2,4,6,8,10] = [2,4,6,8,10]
      (\..) 2 [4,6,8,10] = [2,4,6,8,10]
      (\..) 3 [4,6,8,10] = [4,6,8,10]
      (\..) 4 [6,8,10] = [4,6,8,10]
      (\..) 5 [6,8,10] = [6,8,10]
      (\..) 6 [8,10] = [6,8,10]
      (\..) 7 [8,10] = [8,10]
      (\..) 8 [10] = [8,10]
      (\..) 9 [10] = [10]
      (\..) 10 [] = [10]
      

      The lambda there is even. Also, if you want, Hat can trace into calls of foldr and other internal calls; by default, it doesn't do that.

  • Hi there!

    import Debug.Trace
    fact :: Integer -> Integer
    fact 0 = trace "fact 0 ->> 1" 1
    fact n = trace ("fact " ++ show n) (n * fact (n-1))
    

    or

    import Hugs.Observe
    fact :: Integer -> Integer
    fact 0 = observe "fact 0" 1
    fact n = observe "fact n" (n *  fact (n-1))
    

Why would shl_load() fail for libraries with Thread Local Storage?

Threads in Perl by default take their own local storage for all variables, to minimise the impact of threads on existing non-thread-aware code. In Perl, a thread-shared variable can be created using an attribute:

use threads;
use threads::shared;

my $localvar;
my $sharedvar :shared;

The HP-UX runtime loader does not support dynamic loading of shared libraries that contain thread local storage (TLS).
As a result, when attempting to import modules that contain TLS, the following error is reported:

"/usr/lib/dld.sl: Can't shl_load() a library containing Thread Local Storage"

So I know why I am getting the error; I am just unclear as to why it would be difficult to load a library with TLS.

From stackoverflow
  • The way TLS storage is set up depends on the TLS access model.

    In the simpler "Initial executable / static TLS" model, the loader sets up TLS segment before the first instruction of the main executable is run. It computes the size of that segment by adding up TLS requirements of the main executable and all shared libraries it directly depends on.

    Once this TLS segment is allocated and set up, the application starts running, and may well store pointers into the TLS segment. Hence it's impossible to realloc() storage for the segment -- the loader would have no idea which pointers in the application must be updated.

    Since you can't realloc the segment, and since there is no space in it for additional variables, how can the loader deal with dynamically loaded libraries which require TLS storage of their own?

    The glibc loader actually allocates some extra space in the initial TLS, and so it can dynamically load libraries with TLS, provided they don't use too much space. Once this reserve is exhausted, glibc loader will also refuse to load any additional libraries with TLS requirements.

    On Solaris and Linux it is possible to dynamically load libraries with arbitrary TLS requirements, using the "General Dynamic TLS model".

    It looks like HP-UX v1.6 also supports that model, and in fact makes it the default. But you are probably running an older OS release, where this model is not the default, and may not be supported at all. Check if your compiler version supports +tls=dynamic option, and if so whether building with it helps.

Image comparison - fast algorithm

I'm looking to create a base table of images and then compare any new images against it to determine if the new image is an exact (or close) duplicate of the base. For example: if you want to avoid storing the same image hundreds of times, you could store one copy of it and provide reference links to it. When a new image is entered, you want to compare it to the existing images to make sure it's not a dup... ideas?

(one of mine was to reduce to a small thumbnail and then randomly pick 100 pixel locations and compare...

From stackoverflow
  • Just compare the md5 sums of the files?

    alvatar : That's not addressing the point of similar-but-not-equal images...
    cartman : Well in that case you can't be really sure about similarity anyway, best way would be to use hashes.
  • If you have a large number of images, look into a Bloom filter, which uses multiple hashes for a probabilistic but efficient result. If the number of images is not huge, then a cryptographic hash like md5 should be sufficient.

    meade : So (trying to understand the Bloom filter) - does that mean you select random pixel points on the base image, randomly get either a red/green/blue value of the pixel - then compare to the new image? and then use a probability level (90% match) to determine how similar the two images are?
    jdigital : This isn't a similarity check, it's an equivalence check. If you need similarity, then hashing is not the right approach. The idea behind Bloom is to use multiple hash algorithms to increase the likelihood of unique identification. Selecting random points isn't the best approach for a hashing algorithm because it will yield different results each time.
  • Picking 100 random points could mean that similar (or occasionally even dissimilar) images would be marked as the same, which I assume is not what you want. MD5 hashes wouldn't work if the images were different formats (png, jpeg, etc), had different sizes, or had different metadata. Reducing all images to a smaller size is a good bet; doing a pixel-for-pixel comparison shouldn't take too long, as long as you're using a good image library / fast language and the size is small enough.

    You could try making them tiny, then if they are the same perform another comparison on a larger size - could be a good combination of speed and accuracy...

  • This is a beautiful problem for the OpenCV library...

    I think to accomplish detection of similar but not equal images, you should try a combination of image analysis algorithms (histograms and the like). Anyway, I would take a look at this thread at Gamedev.

  • As cartman pointed out, you can use any kind of hash value for finding exact duplicates.

    One starting point for finding close images could be here. This is a tool used by CG companies to check if revamped images are still showing essentially the same scene.

  • Below are three approaches to solving this problem (and there are many others).

    • The first is a standard approach in computer vision, keypoint matching. This may require some background knowledge to implement, and can be slow.

    • The second method uses only elementary image processing, and is potentially faster than the first approach, and is straightforward to implement. However, what it gains in understandability, it lacks in robustness -- matching fails on scaled, rotated, or discolored images.

    • The third method is both fast and robust, but is potentially the hardest to implement.

    Keypoint Matching

    Better than picking 100 random points is picking 100 important points. Certain parts of an image have more information than others (particularly at edges and corners), and these are the ones you'll want to use for smart image matching. Google "keypoint extraction" and "keypoint matching" and you'll find quite a few academic papers on the subject. These days, SIFT keypoints are arguably the most popular, since they can match images under different scales, rotations, and lighting. Some SIFT implementations can be found here.

    One downside to keypoint matching is the running time of a naive implementation: O(n^2m), where n is the number of keypoints in each image, and m is the number of images in the database. Some clever algorithms might find the closest match faster, like quadtrees or binary space partitioning.


    Alternative solution: Histogram method

    Another less robust but potentially faster solution is to build feature histograms for each image, and choose the image with the histogram closest to the input image's histogram. I implemented this as an undergrad, and we used 3 color histograms (red, green, and blue), and two texture histograms, direction and scale. I'll give the details below, but I should note that this only worked well for matching images VERY similar to the database images. Re-scaled, rotated, or discolored images can fail with this method, but small changes like cropping won't break the algorithm.

    Computing the color histograms is straightforward -- just pick the range for your histogram buckets, and for each range, tally the number of pixels with a color in that range. For example, consider the "green" histogram, and suppose we choose 4 buckets for our histogram: 0-63, 64-127, 128-191, and 192-255. Then for each pixel, we look at the green value, and add a tally to the appropriate bucket. When we're done tallying, we divide each bucket total by the number of pixels in the entire image to get a normalized histogram for the green channel.

    For the texture direction histogram, we started by performing edge detection on the image. Each edge point has a normal vector pointing in the direction perpendicular to the edge. We quantized the normal vector's angle into one of 6 buckets between 0 and PI (since edges have 180-degree symmetry, we converted angles between -PI and 0 to be between 0 and PI). After tallying up the number of edge points in each direction, we have an un-normalized histogram representing texture direction, which we normalized by dividing each bucket by the total number of edge points in the image.

    To compute the texture scale histogram, for each edge point, we measured the distance to the next-closest edge point with the same direction. For example, if edge point A has a direction of 45 degrees, the algorithm walks in that direction until it finds another edge point with a direction of 45 degrees (or within a reasonable deviation). After computing this distance for each edge point, we dump those values into a histogram and normalize it by dividing by the total number of edge points.

    Now you have 5 histograms for each image. To compare two images, you take the absolute value of the difference between each histogram bucket, and then sum these values. For example, to compare images A and B, we would compute

    |A.green_histogram.bucket_1 - B.green_histogram.bucket_1| 
    

    for each bucket in the green histogram, and repeat for the other histograms, and then sum up all the results. The smaller the result, the better the match. Repeat for all images in the database, and the match with the smallest result wins. You'd probably want to have a threshold, above which the algorithm concludes that no match was found. (A small sketch of this histogram comparison appears after these answers.)


    Third Choice - Keypoints + Decision Trees

    A third approach that is probably much faster than the other two is using semantic texton forests (PDF). This involves extracting simple keypoints and using a collection of decision trees to classify the image. This is faster than simple SIFT keypoint matching, because it avoids the costly matching process, and keypoints are much simpler than SIFT, so keypoint extraction is much faster. However, it preserves the SIFT method's invariance to rotation, scale, and lighting, an important feature that the histogram method lacked.

    Update:

    My mistake -- the Semantic Texton Forests paper isn't specifically about image matching, but rather region labeling. The original paper that does matching is this one: Keypoint Recognition using Randomized Trees. Also, the papers below continue to develop the ideas and represent the state of the art (c. 2010):

    meade : The Histogram approach seems to make the most sense. I'm assuming you can rotate the image to perform this on all sides just in case the image being compared to was turned (treating the same image as 4) - thanks
    redmoskito : @meade That's right. Something else to consider: depending on your problem, you might not need to use all 5 histograms in your algorithm. Discarding the texture direction histogram will allow you to match rotated versions of the picture. Discarding the texture scale histogram will allow you to match re-scaled versions of the image. You'll lose some ability to compare similarity, but this might not be a problem, depending on your situation. Also, since computing texture information is the most costly part of the algorithm, this will make your algorithm speedy, too.
  • I have an idea which could work, and it is most likely to be very fast. You can sub-sample an image to, say, 80x60 resolution or comparable, and convert it to grey scale (after subsampling it will be faster). Process both images you want to compare. Then run the normalised sum of squared differences between the two images (the query image and each from the db), or even better Normalised Cross Correlation, which gives a response closer to 1 if both images are similar. Then if the images are similar you can proceed to more sophisticated techniques to verify that they are the same image. Obviously this algorithm is linear in the number of images in your database, but even so it is going to be very fast - up to 10,000 images per second on modern hardware. If you need invariance to rotation, then a dominant gradient can be computed for this small image, and then the whole coordinate system can be rotated to a canonical orientation; this, though, will be slower. And no, there is no invariance to scale here.

    If you want something more general, or to use big databases (millions of images), then you need to look into image retrieval theory (loads of papers have appeared in the last 5 years). There are some pointers in other answers. But it might be overkill, and the suggested histogram approach will do the job. Though I would think a combination of many different fast approaches would be even better.
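
Putting a couple of the ideas above into code: the sketch below uses Python with the Pillow imaging library, a plain content hash for exact duplicates, and the normalized color-histogram distance described in the answer above for near duplicates. The bucket count, thumbnail size, and the idea of a match threshold are arbitrary choices, not recommendations.

    # Exact duplicates: hash the raw file bytes.
    import hashlib
    from PIL import Image  # Pillow

    def file_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Near duplicates: normalized per-channel color histograms, compared by the
    # summed absolute bucket difference (smaller distance = closer match).
    BUCKETS = 4  # e.g. 0-63, 64-127, 128-191, 192-255 per channel

    def channel_histogram(values):
        counts = [0] * BUCKETS
        for v in values:
            counts[v * BUCKETS // 256] += 1
        total = float(len(values))
        return [c / total for c in counts]

    def image_signature(path, size=(80, 80)):
        img = Image.open(path).convert("RGB").resize(size)
        r, g, b = zip(*img.getdata())
        return [channel_histogram(channel) for channel in (r, g, b)]

    def distance(sig_a, sig_b):
        return sum(abs(a - b)
                   for hist_a, hist_b in zip(sig_a, sig_b)
                   for a, b in zip(hist_a, hist_b))

    # Usage idea: keep the hash and signature of every stored image, reject
    # exact hash matches outright, and treat a histogram distance below some
    # tuned threshold as "close enough" to count as a duplicate.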

2d Platformer physics

It was a long holiday weekend, so I got the coding bug again and started playing around:

Mario

I wrote a basic tile engine, but having never attempted this before, I am really struggling with handling sprite collision detection and implementing realistic physics for gravity.

For any other hobby game writers, can you point me towards some walkthroughs on the best way to approach this?

Update:

I thought I'd share a progress report:

http://www.youtube.com/watch?v=-RKNQ2UiiLY <-- Game in Action

It's still really buggy, but collision detection is mostly working, and I've started working on some other features (such as bumping the blocks (notice the bug) and interacting with the enemies).

Mario still walks like he is on the moon. I'm using these constants; any advice for tweaking them for more realism? (A rough sketch with adjusted values appears after the answers below.)

    const float AirDrag = 1.00f;
    const float GroundFriction = .97f;
    const float Gravity = 0.8f;
From stackoverflow
  • Some of the tutorials here will probably help you:

    http://www.gamedev.net/reference/

  • That may be a detour, but try the Platformer starter kit from XNA 3.0, which contains stuff like physics and basic collision detection. You will need to change stuff to make it work outside of XNA, but it's not rocket science.

    XNAGS 3.0 download

  • Download the FarseerPhysics engine and have a look at how it works: http://www.codeplex.com/FarseerPhysics I think it's the best thing available for XNA/Silverlight!

  • Gravity is easy:

    const gravity = ... ; // pixels per timestep (eg. video frame) squared
    // while in freefall, each timestep:
    y_velocity += gravity;
    y_pos += y_velocity;
    

    Mind you, most 2d platform games I've played don't have realistic gravity. Just do whatever makes the game fun!

    gnovice : Technically, I think you should update position first (so it uses the velocity of the previous timestep), then update velocity.
  • Ever heard of GameMaker?

    FlySwat : That takes all the fun out of it.
  • jnrdev might be of some assistance. It covers tile collision/response and slopes. It's not the best code I have ever seen, but it gets the job done.

    grepsedawk : Thanks for that link. I was going to recommend it as well but it has since been lost over the years in my bookmarks :)
  • yawnz dear god.... write code not drag and drop... Amen! :P

  • I don't know what you're using for a physics model, but physics models that use fluid drag were recently addressed in another SO question. I won't repeat everything that I gave in my answer, I'll just link to it.

    To summarize, the OP for the question wanted to accelerate an object from rest to a maximum velocity. I went through a few derivations for modeling velocity as a function of time for two different types of drag. Your situation may be slightly different, so the integrals used may have different forms or need to be solved with different initial conditions, but hopefully my answer will point you in some informative directions.

  • There are a couple of really useful 2-d platformer tutorials at http://www.metanetsoftware.com/technique/tutorialA.html and http://www.metanetsoftware.com/technique/tutorialB.html. I think they've been referenced by others elsewhere on SO. They cover collision detection and response, raycasting, various optimisation techniques etc. and have a good explanation of the theory behind it all for those (like me) who are less mathematically inclined. It doesn't go as far as stuff like rigid body dynamics, but I don't think you'd need that for the type of game you are writing (though it would of course be cool if you added this sort of stuff...)

    bm212 : by the look of it, the tutorials I suggested cover similar stuff to jnrdev mentioned by Zack Mulgrew. I haven't read jnrdev yet (beyond an initial glance) so can't really compare them though.
  • For your bug with the multiple blocks being bumped, you could fix that by only bumping the block that is most aligned with the player sprite, or has the least offset. Be sure not to limit it to just one direction. Blocks can actually be bumped from any direction in Mario. (From above by doing a ground pound in some games, or the drill-spin-thing.) (From the sides by using a shell.)
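
On the "walking on the moon" feel: below is a rough per-frame update sketch in Python built around the constants from the question. The adjusted numbers are guesses rather than tuned values; the usual fixes are a noticeably larger gravity constant, a terminal fall speed, stronger ground friction, and applying gravity only while airborne.

    # Illustrative constants: stronger gravity and damping than the question's
    # values (Gravity 0.8, GroundFriction 0.97, AirDrag 1.00), which feel floaty.
    AIR_DRAG = 0.95         # horizontal damping while airborne
    GROUND_FRICTION = 0.85  # horizontal damping on the ground
    GRAVITY = 1.6           # added to vertical speed each frame while falling
    MAX_FALL_SPEED = 12.0   # terminal velocity, in pixels per frame

    def step(player, on_ground):
        # Damp horizontal speed differently in the air and on the ground.
        player.vx *= GROUND_FRICTION if on_ground else AIR_DRAG

        # Accumulate gravity only while airborne, clamped to a terminal speed.
        if on_ground:
            player.vy = 0.0
        else:
            player.vy = min(player.vy + GRAVITY, MAX_FALL_SPEED)

        # Semi-implicit Euler: update velocity first, then position.
        player.x += player.vx
        player.y += player.vy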