Thursday, April 14, 2011

[PHP] AMFPHP + Zend Session

Hi, I have tried to use the ZendSession class with AMFPHP, but AMFPHP uses raw PHP sessions in some of its functions and methods. I need to manage all session variables through ZendSession. Is there a clean way to do this, short of hacking and modifying the original AMFPHP code?

From stackoverflow
  • I know this is not the answer to your question, but did you take a look at http://framework.zend.com/download/amf ?

  • I've commented on the answer above, but maybe it's not visible at first glance, so I'll write it once again: I'm not interested in ZendAMF, because it seems to be very slow... But thanks for the answer :) So, any other ideas?

    Simon Groenewolt : If you want a comment like yours to be visible it is best to edit your original question - that way everyone will see it. Do not use the answers to make additional comments, because the order can change.

Is it faster to search for a large string in a DB by its hashcode?

If I need to retrieve a large string from a DB, is it faster to search for it using the string itself, or would I gain by hashing the string, storing the hash in the DB as well, and then searching based on that?

If yes, what hash algorithm should I use? (Security is not an issue; I am looking for performance.)

If it matters: I am using C# and MSSQL2005

From stackoverflow
  • If you use a fixed-length field and an index, it will probably be faster...

  • In general: probably not, assuming the column is indexed. Database servers are designed to do such lookups quickly and efficiently. Some databases (e.g. Oracle) provide options to build indexes based on hashing.

    However, in the end this can be only answered by performance testing with representative (of your requirements) data and usage patterns.

  • If your strings are short (fewer than 100 characters, in general), searching on the strings themselves will be faster.

    If the strings are large, a hash search may, and most probably will, be faster.

    HashBytes with MD4 seems to be the fastest for DML.

  • Though I've never done it, it sounds like this would work in principle. There's a chance you may get false positives but that's probably quite slim.

    I'd go with a fast algorithm such as MD5 as you don't want to spend longer hashing the string than it would have taken you to just search for it.

    The final thing I can say is that you'll only know if it is better if you try it out and measure the performance.

  • Are you doing an equality match, or a containment match? For an equality match, you should let the db handle this (but add a non-clustered index) and just test via WHERE table.Foo = @foo. For a containment match, you should perhaps look at full text index.

  • I'd be surprised if this offered huge improvement and I would recommend not using your own performance optimisations for a DB search.

If you use a database index there is scope for performance to be tuned by a DBA using tried and trusted methods. Hard-coding your own index optimisation will prevent this and may stop you from gaining any performance improvements in indexing in future versions of the DB.

  • I am confused and am probably misunderstanding your question.

    If you already have the string (thus you can compute the hash), why do you need to retrieve it?

    Do you use a large string as the key for something perhaps?

    Sruly : Good point. I think I didn't make myself clear. I have the string, but I want to retrieve other information related to it that is stored in the DB.
    Lasse V. Karlsen : Then why not consider using something other than the string to find those related things? But in any case, I agree with the top answer (atm), you should test and measure.
  • First - MEASURE it. That is the only way to tell for sure.
    Second - If you don't have an issue with the speed of the string searching, then keep it simple and don't use a Hash.

    However, for your actual question (and just because it is an interesting thought): it depends on how similar the strings are. Remember that the DB engine doesn't need to compare all the characters in a string, only enough to find a difference. If you are looking through 10 million strings that all start with the same 300 characters, then the hash will almost certainly be faster. If however you are looking for the only string that starts with an x, then the string comparison could be faster. I think though that SQL will still have to get the entire string from disk, even if it then only uses the first byte (or first few bytes for multi-byte characters), so the total string length will still have an impact.

    If you are trying the hash comparison then you should make the hash an indexed calculated column. It will not be faster if you are working out the hashes for all the strings each time you run a query!

    You could also consider using SQL's CRC function. It produces an int, which will be even quicker to compare and is faster to calculate. But you will have to double-check the results of this query by actually comparing the string values, because the CRC function is not designed for this sort of usage and is much more likely to return duplicate values. You will need to do the CRC or hash check in one query, then have an outer query that compares the strings. You will also want to watch the generated QEP to make sure the optimiser is processing the query in the order you intended. It might decide to do the string comparisons first, then the CRC or hash checks second.

    As someone else has pointed out, this is only any good if you are doing an exact match. A hash can't help if you are trying to do any sort of range or partial match.

    : Well, the hash value is a number, so it's always faster to compare a single number to another number than it is to compare strings. Even in your example of the only string starting with an x, it still needs to compare ASCII values.
    pipTheGeek : The hash value isn't a single number, it's a varbinary. And isn't the ASCII value of x a number?
  • TIP: if you are going to store the hash in the database, an MD5 hash is always 16 bytes, so it can be saved in a uniqueidentifier column (and a System.Guid in .NET).

    This might offer some performance gain over saving hashes in a different way (I use this method to check for binary/ntext field changes but not for strings/nvarchars).
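
    Computing the 16-byte digest client-side, as this tip suggests, might look like the following sketch (shown in Java; the question uses C#, where System.Security.Cryptography.MD5 is the analogue, and Md5Key is a hypothetical helper name):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Key {
    // Compute the 16-byte MD5 digest of a string, for use as a
    // fixed-width lookup key (e.g. a uniqueidentifier column).
    public static byte[] md5(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            return md.digest(s.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is guaranteed to be present", e);
        }
    }

    // Hex-encode the digest, e.g. for logging or a CHAR(32) column.
    public static String md5Hex(String s) {
        return String.format("%032x", new BigInteger(1, md5(s)));
    }
}
```

    You would then query on the stored hash column and, to guard against collisions, also compare the full string (the column names here are placeholders): WHERE HashCol = @hash AND BigStringCol = @s.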

  • The 'ideal' answer is definitely yes. String matching against an indexed column will always be slower than matching a hash value stored in an indexed column. This is what hash values are designed for: they take a large dataset (e.g. 3000 comparison points, one per character) and coalesce it into a smaller dataset (e.g. 16 comparison points, one per byte).

    So, the most optimized string comparison tool will be slower than the optimized hash value comparison.

    However, as has been noted, implementing your own optimized hash function is dangerous and likely not to go well. (I've tried and failed miserably.) Hash collisions are not particularly a problem, because then you will just have to fall back on the string matching algorithm, which means that it would be (at worst) exactly as fast as your string comparison method.

    But, this is all assuming that your hashing is done in an optimal fashion, (which it probably won't be) and that there will not be any bugs in your hashing component (which there will be) and that the performance increase will be worth the effort (probably not). String comparison algorithms, especially in indexed columns are already pretty fast, and the hashing effort (programmer time) is likely to be much higher than your possible gain.

    And if you want to know about performance, Just Measure It.

Strange error while executing a C code in MSVC 2005

Hi,

I am facing the following quirky error.

I have a workspace in MSVS 2005, all C code. I have declared a global variable in one C file (file1.c). This file has the function main(), in which I initialize that variable to 0. From main() there is a call to a function (func1 in file2.c) which sets the value of this global variable to 1. In file2.c I have declared the global variable as "extern ..." and accessed it. What I noticed is that the moment execution enters the function func2, the watch window shows that the address of the global variable itself has changed to a totally different address (in the watch window I am watching &variable). As a result, when the variable is set to 1, the value 1 is written to an altogether different memory address. So when I later check the variable in an if condition (if variable == 1), it still reads 0, does not satisfy the condition, and does not take the code path it was expected to take.

Workaround: I declared that variable inside one of my existing global structures, and then accessed it for the same operations; the code works as expected.

So what could explain this error, which causes the address of the global variable to change when it is declared as a global in some C file? It does not matter in which *.c file I declare it and from which file I access it using "extern"; the result is the same: the global variable's address changes and the subsequent operations are erroneous. No optimization options are enabled.

Thanks,

-AD

From stackoverflow
  • Maybe try declaring it volatile (not sure if that's even valid for globals) and disable any compiler optimizations, in case something tricky is going on.

  • If the variable has a different address in different translation units, you are not seeing one but at least two variables with the same name.

    Most common cause: you may have accidentally declared a local variable on the stack with the same name. Check your code for this. If the variables really are global, the linker should complain if two translation units contain the same symbol.

    If this does not help, and you still see multiple copies of the same symbol name, it's probably best to take a look at the map file (this can be enabled in the linker settings).

    All external symbols are listed there with their name, address and (most important in your case) the object file that contained them.

    The addresses in the map file may be just offsets. In this case, do all your calculations relative to a symbol that is known to exist only once. The main() entry point might be good for this.

  • Probably a typo or something similar in your code. Try this working demo:

    file1.c

    #include <stdio.h>
    
    int variable;
    void fun1(int k);
    
    int main()
    {
        printf("%d\n", variable);
        fun1(4);
        printf("%d\n", variable);
        return 0;
    }
    

    file2.c

    extern int variable;
    
    void fun1(int k)
    {
        variable = k;
    }
    

    Output:

    0
    4
    

    To compile:

    cl.exe file1.c file2.c
    
  • Can only guess without actually seeing the code, but here are 2 possibilities:

    1. the global variable is being hidden by a local in either main() or func2() (or maybe func1() - the question mentions func1() but I suspect that's a typo - this is why cutting and pasting code is quite important);
    2. you are mistakenly declaring the global variable as static in file1.c and have an initializer on your extern declaration in file2.c. Having an initializer on the extern declaration will cause that declaration to be a definition, too.

A non-deprecated exact equivalent of Date(String s) in Java?

I have old code that uses new Date(dateString) to parse a date string. Compiling the code produces the deprecation warning Date(java.lang.String) in java.util.Date has been deprecated.

The javadoc unhelpfully advises me to use DateFormat.parse(), even though the DateFormat class does not have a static parse method.

Now, I know how to use SimpleDateFormat, but I want to make sure I'm getting exactly the same behaviour as the deprecated Date constructor.

From stackoverflow
  • DateFormat has static methods that return DateFormat instances. I don't know which one (if any) has the same behavior as Date(String s) but here you go:

    DateFormat.getInstance()
    DateFormat.getDateInstance()
    DateFormat.getTimeInstance()
    DateFormat.getDateTimeInstance()
    
    itsadok : None of these seem to work
  • Here's my guess (I posted as community wiki so you can vote up if I'm right):

    Date parsed = new Date();
    try {
        SimpleDateFormat format =
            new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy");
        parsed = format.parse(dateString);
    }
    catch(ParseException pe) {
        throw new IllegalArgumentException();
    }
    
    Paul Tomblin : Yeah, that's pretty much how I'd do it if I wanted to get exactly the same behaviour. The reason why DateFormat.getInstance() is better is it returns the appropriate formatter for the current locale.
    sleske : Please, pretty please, never ever do a "new IllegalArgumentException()" :O . At the very least chain the original exception (new IllegalArgumentException(pe)).
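
    One caveat about the sketch above: the pattern "EEE MMM dd HH:mm:ss zzz yyyy" relies on English day and month names, which the no-argument SimpleDateFormat constructor only understands under an English default locale, whereas the deprecated Date(String) constructor accepted them regardless of locale. Pinning Locale.US gets closer to the old behaviour (a sketch; LegacyDateParse is a made-up name):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class LegacyDateParse {
    // Parse Date.toString()-style strings; Locale.US pins the English
    // day/month names so this works regardless of the default locale.
    public static Date parse(String s) {
        SimpleDateFormat format =
            new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy", Locale.US);
        try {
            return format.parse(s);
        } catch (ParseException pe) {
            throw new IllegalArgumentException("Unparseable date: " + s, pe);
        }
    }
}
```

    This still isn't an exact match for the deprecated constructor (which accepted many other layouts too), but it round-trips Date.toString() output reliably.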
  • Short answer (before further investigation): no, it is not equivalent. The Date(String toParse) constructor is equivalent to the parse method of the Date class (which is also deprecated)... And the javadoc of that method claims:

    Note that this is slightly different from the interpretation of years less than 100 that is used in SimpleDateFormat.

If that is the only difference, I guess you can go this way.

  • SimpleDateFormat is the way to go. Can I point out, however, that you may feel compelled to define a single SimpleDateFormat instance and build Date objects using it. If you do, beware that SimpleDateFormat is not thread-safe, and you may be exposing yourself to some potentially hard-to-debug issues!

    I'd recommend taking this opportunity to look at Joda, which is a much better-thought-out (and thread-safe) API. It forms the basis of JSR-310, the proposed new Java date API.

    I understand this is a bit more work. However it's probably worthwhile given that you're having to refactor code at the moment.

  • If you take a look at source of the Date.parse(String s) method that Nicolas mentions, you'll see that it will be difficult or impossible to construct a date format that exactly reproduces the behavior.

    If you just want to eliminate the warning, you could put @SuppressWarnings({"deprecation"}) on the method calling the Date(String) constructor.

    If you really want to ensure future access to this behavior with future JREs, you might be able to just extract the method from the JDK sources and put it into your own sources. This would require a careful read of the source code licenses and consideration of their application to your specific project, and might not be permissible at all.
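
    If you go the suppression route, confining the deprecated call to a single annotated helper keeps the warning from spreading through your codebase (a sketch; LegacyDates is a made-up name):

```java
import java.util.Date;

public class LegacyDates {
    // Confine the deprecated constructor to one place and silence
    // the deprecation warning only on this method.
    @SuppressWarnings("deprecation")
    public static Date parseLegacy(String s) {
        return new Date(s);
    }
}
```

    Callers then use LegacyDates.parseLegacy(dateString) and compile cleanly, while the behaviour stays exactly that of the old constructor.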

Wordpress previous_posts_link() leads to a 404 error not found

Below is the code I am using. I have tried everything I have been able to find and it still doesn't work. My permalink structure is /%category%/%postname%/. I believe that the URL it is trying to go to is correct, i.e. http://localhost:8888/wordpress/blog/page/2. Annoyingly, the exact same code works on another site I designed previously.

Could someone point me in the right direction please? Thanks

<?php get_header(); ?>
 <div id="content" class="narrowcolumn">
 <?php 
  $paged = (get_query_var('paged')) ? get_query_var('paged') : 1;
  query_posts("cat=3&showposts=2&paged=" . $paged);

  $wp_query->is_archive = true; $wp_query->is_home = false;
 ?>

 <?php if (have_posts()) : ?>
 <div id="lefttop"></div>

 <div id="blogpoint">
 <div id="leftcol">
  <?php while (have_posts()) : the_post(); ?>

   <div id="leftsquidge">
    <h2><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h2><br /><br />

     <?php the_excerpt(); ?>
   </div> 
   <div id="rightsquidge">
    <?php the_tags( '<p><strong>File under:</strong> ', ', ', '</p>'); ?>
    <?php the_time('F jS, Y') ?>  by <strong><?php the_author() ?></strong>
   </div>
   <div style="clear:both;"></div> 
   <br /><br />
  <?php endwhile; ?>
  <div class="navigation" style="padding:0px;margin:0px;">
   <div class="alignleft"><?php next_posts_link('&laquo; Older Entries') ?></div>
   <div class="alignright"><?php previous_posts_link('Newer Entries &raquo;') ?></div>
  </div>
 <?php endif; ?> 
  <div style="clear:both;"></div> 
  </div>

  </div>
  <div id="leftbot"></div>
 </div>

<?php get_sidebar(); ?>

<?php get_footer(); ?>


EDIT

I have answered my own question. It was something I had tried before that wasn't working: you have to create a page, on the dashboard, that uses your category as the template.

From stackoverflow
  • If the same code works fine on another site, then check your settings for this site. Compare the permalink settings on both sites.

    Do both sites run in the same environment (Apache or IIS)?

    Drew : Yeah, the settings are the same on both sites. The site that works runs on Apache, and this site runs locally at the moment.
    Drew : on a MAMP setup

Graphics.drawImage() in Java is EXTREMELY slow on some computers yet much faster on others

I'm having a strange problem: Graphics.drawImage() in Java is extremely slow on some computers and faster on others. This isn't related to the computers' power, either; some weaker computers run it fine while some stronger ones seem to choke up at the drawImage call.

It may or may not be related to the width and height; I have a very, very large width and height defined (something like 5000 by 2500). I wouldn't think it's the issue, except, like I said, it runs at real-time speed on some computers and slower on others, and it doesn't seem to be tied to the computers' relative power.

Both computers have the same version of Java and both run Vista. One has a 1.83 GHz Core 2 Duo with 1 GB RAM and onboard graphics (runs everything fine); the other has a 2.53 GHz Core 2 Duo with a 9600GS (latest nVidia drivers) and 4 GB of RAM, and it literally chugs on the drawImage call.

Any ideas?

edit: OK, this is really weird. I'm drawing the image to a window in Swing. When I resize the window and make it really small, the image gets scaled down too and becomes small. Suddenly everything runs smoothly, and when I scale it back up to the size it was before, it's still running smoothly!

It also has multiple-monitor issues: if I do the resize trick to make it run faster on one monitor, then scroll the window over to another monitor, it starts chugging again once more than half of the window is in the new monitor. I have to resize the window to small and back to its original size again to get the speed back.

If I do the resize trick on one monitor and move the window over to the other, it of course chugs; but if I return it to the original monitor, on which I did the resize trick, it works 100%.

If I have two swing windows open (displaying the same image) they both run slow, but if I do the resize trick on one window they both start running smoothly (however this isn't always the case).

*When I say resize the window, I mean make it as small as possible, to the point where the image can't actually be seen.

Could this be a bug in Java maybe?

From stackoverflow
  • There are several things that could influence performance here:

    • Available RAM
    • CPU speed
    • Graphics card (onboard or separate)
    • Graphics driver
    • Java version
    • Used video mode (resolution, bitdepth, acceleration support)

    EDIT: Having a look at the edited question, I'd propose to check if the 9600GS system has the newest NVIDIA drivers installed. I recently installed a driver for an Intel onboard graphics card that replaced the generic Windows driver and made moving windows, watching videos, browsing etc. a lot faster.

    All the other specs look good. Perhaps Java doesn't detect the 9600GS and doesn't use hardware acceleration, but I doubt this.

    Also check the OS configuration. On Windows, you can turn off hardware acceleration for debugging purposes.

    Of course the best way to handle this would be to change your code - resize the image or split it up into chunks as DNS proposed. You'll never be able to see the whole image as it is on the screen.

  • How are you judging the computers' power? A 50K x 25K 32-bit image takes more than 4.5 GB of RAM to hold in memory (50000 * 25000 * 4 bytes). If one computer has more RAM than another, that can make a huge difference in speed, because it won't have to swap to disk as often. You should consider grabbing subsections of the image and working with those, instead of the whole thing.

    Edit: Are you using the latest Java & graphics drivers? If your image is only 5Kx2.5K, the only thing I can think of is that it's doing it without any hardware acceleration.

  • What is different, and what is the same? "Some computers" is too vague: are the operating systems the same? Same versions? Are your Java installations all the same version?

  • Check the screen settings. My bet is that pixel depth is different on the two systems, and that the slow one has an odd pixel depth related to the image object you are trying to display.

  • Since Java can use OpenGL to accelerate 2D drawing, the performance of your app may be affected by the OpenGL performance of the graphics chip in the respective computer. Support for OpenGL is dwindling in the 3D industry, which means that (ironically) newer chips may be slower at OpenGL rendering than older ones, not only due to hardware but also drivers.

  • If you are using Sun's Java, try some of the following system properties, either as -D command-line parameters or set via System.setProperty() in the first lines of main:

    sun.java2d.opengl=true      //force the OpenGL pipeline
    sun.java2d.ddscale=true     //only when using Direct3D
    sun.java2d.translaccel=true //only when using Direct3D

    More flags can be viewed on Sun's Java2D flags page. Look at sun.java2d.trace, which can allow you to

    determine the source of less-than-desirable graphics performance

  • Performance of writing an image to a screen is very much affected by the format in which the image is stored. If the format is the same as the screen memory wants then it can be very fast; if it is not then a conversion must be done, sometimes pixel by pixel, which is very slow.

    If you have any control over how the image is stored, you should store it in a format that the screen is looking for. Here is some sample code:

        GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice device = env.getDefaultScreenDevice();
        GraphicsConfiguration config = device.getDefaultConfiguration();
        BufferedImage buffy = config.createCompatibleImage(width, height, Transparency.TRANSLUCENT);
    

    If you are going to draw the image many times, it may be worth converting it to a compatible format even if it came in some other format.

    Drawing an image will also be slower if you are transforming it as you draw, which the 'resizing' part of your description makes me think you might be doing.
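
    The convert-once step might be sketched like this (using an explicit BufferedImage type for the target, purely so the example is self-contained; in a real app you would create the target with config.createCompatibleImage from the snippet above in exactly the same way):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ImageConvert {
    // Copy 'src' into a new image of the given type (e.g. one matching
    // the screen), so later drawImage calls need no per-pixel conversion.
    public static BufferedImage convert(BufferedImage src, int type) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), type);
        Graphics2D g = out.createGraphics();
        try {
            g.drawImage(src, 0, 0, null);
        } finally {
            g.dispose();
        }
        return out;
    }
}
```

    Do this once when the image is loaded and keep drawing the converted copy; the expensive format conversion then happens one time instead of on every repaint.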

JMS alternative? something for decoupling sending emails from http reqs

Hi,

we have a web application that does various things and sometimes emails users depending on a given action. I want to decouple the HTTP request threads from actually sending the email, in case there is some trouble with the SMTP server or a backlog. In the past I've used JMS for this and had no problem with it. However, for this web app, JMS just feels like overkill right now (in terms of setup etc.), and I was wondering what alternatives are out there. Ideally I'd just like something that I can run in-process (JVM/Tomcat), but when the servlet context is unloaded, any pending items in the queue would be swapped to disk/DB. I could of course just code something up involving an in-memory queue, but I'm looking to gain the benefit of open-source projects, so I'm wondering what's out there, if anything.

If JMS really is the answer, does anyone know of something that could fit our simple requirements? Thanks.

From stackoverflow
  • I agree that JMS is overkill for this.

    You can just send the e-mail in a separate thread (i.e. separate from the request handling thread). The only thing to be careful about is that if your app gets any kind of traffic at all, you may want to use a thread pool to avoid resource depletion issues. The java.util.concurrent package has some nice stuff for thread pools.

    toolkit : Last time I looked, the use of application-created threads was considered non-portable, since different servlet containers might limit the creation of Thread instances.
    Willie Wheeler : I believe that thread creation happens through the SecurityManager, which the admin can configure as desired.
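
    A minimal sketch of that thread-pool hand-off (AsyncMailer and SmtpMailer are hypothetical names; the SmtpMailer implementation would wrap whatever actually talks to your SMTP server, e.g. JavaMail):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncMailer {
    public interface SmtpMailer {       // stand-in for your real mail code
        void send(String to, String subject, String body);
    }

    // A small fixed pool: request threads hand off and return immediately,
    // and a backlogged SMTP server can only tie up these workers.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final SmtpMailer mailer;

    public AsyncMailer(SmtpMailer mailer) {
        this.mailer = mailer;
    }

    public void sendLater(String to, String subject, String body) {
        pool.submit(() -> mailer.send(to, subject, body));
    }

    // Call from a ServletContextListener on shutdown.
    public void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

    Note that this in-memory queue does not survive a crash or context unload, so the swap-to-disk requirement in the question still needs the pending items persisted somewhere.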
  • You could use a scheduler. Have a look at Quartz.

    The idea is that you schedule a job to run at regular intervals. All requests need to be persisted somewhere; the scheduled job reads them and processes them. Define the interval between two subsequent runs to fit your needs.

    This is the recommended way of doing things. Full-fledged application servers offer JEE timers for this, but these aren't available in Tomcat. Quartz is fine, though, and you avoid starting your own threads, which can cause a mess in some situations (e.g. during application updates).
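
    The poll-and-process loop described above can be sketched with the JDK's ScheduledExecutorService; Quartz adds persistence, clustering, and cron-style triggers on top of the same idea. PendingStore is a hypothetical interface over wherever the requests were persisted (disk, DB):

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MailPoller {
    public interface PendingStore {          // hypothetical persistence layer
        List<String> drainPending();         // read and remove queued requests
    }

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    // Poll the store at a fixed interval and process whatever accumulated.
    public void start(PendingStore store,
                      java.util.function.Consumer<String> process,
                      long intervalMillis) {
        scheduler.scheduleWithFixedDelay(() -> {
            for (String request : store.drainPending()) {
                process.accept(request);
            }
        }, 0, intervalMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}
```

    Because the requests live in the store rather than in memory, anything not yet processed simply waits for the next run after a restart.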

  • We have the exact same problem. This may sound a little simplistic but it does work:

    1. Write the request to disk to an "outgoing" mail folder.
    2. Email process reads in the request.
    3. When the message has been sent, the outgoing mail message is deleted.
    4. The plan is to use Amazon S3 to help distribute message transmission across servers if needed.
  • I'm using JMS for something similar. Our reasons for using JMS:

    • We already had a JMS server for something else (so it was just adding a new queue)
    • We wanted our application be decoupled from the processing process, so errors on either side would stay on their side
    • The app could drop the message in a queue, commit, and go on. No need to worry about how to persist the messages, how to start over after a crash, etc. JMS does all that for you.
  • Wow, this issue comes up a lot. CommonJ WorkManager is what you are looking for. A Tomcat implementation can be found here. It allows you to safely create threads in a JEE environment but is much lighter-weight than using JMS (which will obviously work as well).

  • Since you say the app "sometimes" emails users, it doesn't sound like you're talking about a high volume of mail. A quick-and-dirty solution would be to just Runtime.getRuntime().exec():

    sendmail recipient@domain.com

    and dump the message into the resulting Process's getOutputStream(). After that it's sendmail's problem.

    Figure a minute to see if you have sendmail available on the server, about fifteen minutes to throw together a test if you do, and nothing to install assuming you found sendmail. A few more minutes to construct the email headers properly (easy - here are some examples) and you're done.

    Hope this helps...

  • I would think Spring Integration would work in this case as well.

    http://www.springsource.org/spring-integration

  • Beyond JMS, for short messages you could also use Amazon Simple Queue Service (SQS). While you might think it overkill too, consider that it requires minimal maintenance, scales nicely, has ultra-high availability, and doesn't cost all that much. There's no cost for creating new queues or for having an account; as far as I recall, pricing is purely based on the number of operations you perform (sending messages, polling/retrieving).

    The main limitation really is the message size (there are others, like ordering not being guaranteed due to its distributed nature); but that might work as is. Or, for larger messages, use the related AWS service, S3, for storing the actual body, and just pass headers through SQS.

Redirecting from the doView method in a portlet

I'm using WebSphere Portal 6.0 and I'm wondering if there's a way to tell the server which page to render from the doView method. I know I can do it from the processAction method, but unfortunately the semantics of the problem don't allow it.

Thank you for your help

From stackoverflow
  • I doubt it is possible to send a redirect in doView(). Two reasons for that:

    • For performance and various other reasons, the portal may call doView() after the headers of the portal's HTTP response have been generated and sent out, thus too late to issue a redirect.
    • It could be pretty "evil" to be able to do that: a portlet's doView() can be called at any time by the portal, without the user interacting with that portlet. Thus a portlet could redirect after a random page refresh, or after an interaction with another portlet.

    In general, I'd say that if a portlet needs to redirect in doView, it may require a redesign. Perhaps try to describe your problem in more detail.

  • As I understand it, you want to decide which JSP/HTML page you are going to show to the user.

    In that case, this is what you need to do.

    public void doView(RenderRequest req, RenderResponse res)
            throws IOException, PortletException {
    
        PortletRequestDispatcher prd =
            getPortletContext().getRequestDispatcher("/WEB-INF/jsp/view.jsp");
        prd.include(req, res);
    }
    

    You can decide each time which JSP you want to obtain the request dispatcher for.

How to prevent resale of PHP source?

Do you have a strategy for this? If I sell a web system to a client and, in accordance with the legal agreement, the customer is not allowed to sell it to others, how can I be sure he doesn't do that anyway?

My first idea is some sort of key that must be in the root directory, and that file is only valid for that specific domain.

Other Ideas?

UPDATE 1 I agree that this is mainly a legal problem. But the facts are: I've got a client that buys this system from me to sell it to others. And he wants the system set up so it's easy for him to make his profit. The ability to package the web server and sell it is part of the specification.

UPDATE 2 Another point of view is this: in that case it is hard to prove how much of the resold software comes from my original system.

UPDATE 3 Obfuscation is not an option for me; I really hate it.

From stackoverflow
  • This is a social problem, not a technical one. You have copyright law on your side; no more should be needed. (Any and all technical solutions would be the equivalent of DRM, which is inherently ineffective.)

    Regarding your update: So basically you become a DRM supplier for this client of yours. So: Does the client understand that DRM is ineffective? Try educating them before wasting time on implementation.
    If the client remains adamant, I'd take a long hard look at what current DRM vendors are doing. E.g. lots of handwaving, some obfuscation, and, erm... I don't know... what else do they do? Either way, you can be certain that any solution you implement will be undone in less than 10% of the time it took you to implement it - so spend as short an amount of time on this as you can get away with. (Before it was edited out, you wrote "It's in the spec" about "being sure that the system isn't sold on": this might mean you've agreed to build something which is technically impossible (you can never be sure), and would require you spending an infinite amount of time building something which comes close...)

    You might investigate having the application contact some central registry when run for the first time (with an embedded fingerprint, different for each sale, so you know who passed on their code). That way your client can find out where the application is being run, and has a chance of contacting those who use it without permission. (Potentially turning them into new paying customers.) Maybe give said central registry the ability to send a kill-signal back? That gets really scary though, and the liability concerns would be huge; avoid if at all possible.

    Glenn : I have not agreed to build this. That was more a hypothetical statement to get technical answers.
  • The proper way of prohibiting re-sale of your software is via legal constraints, not technical ones. Have your customer sign a contract where they agree not to re-sell.

    Technical prevention measures universally make the product worse, also in the technical sense, and that lessens the value to the customers. The stronger the technical protection is, the bigger the nuisance.

    For example, suppose the customer legitimately wants to change their domain name. Should they have to buy a new copy? I think not. If you tell them how to change the keyfile to match their new domain, they can then use that information to enable them to re-sell. However, the legal protection applies regardless of what technical tricks they come up with.

  • Some use an obfuscator like Zend Guard, but honestly I think that technical solutions for this kind of problem are as doomed as DRM is for audio and video content. Fundamentally, what you're giving them is meant to work, so it's just a technical problem to make it work in ways you don't want.

    Your recourses here are (imho) legal not technical. You have a contract with the client that lays out what they can and can't do. You have a good lawyer draft that contract. If they don't abide by it then you pretty much have to take them to court.

    Don't count on any form of obfuscation or copy protection as any kind of guarantee.

    This is particularly a problem for scripting languages because (Zend notwithstanding) they are fundamentally plaintext distribution methods. Java and .NET and other bytecode-compiled languages have a little more protection, but they can be disassembled into intermediate code too (obfuscation is more useful here). Truly compiled languages (e.g. C, C++) have the most protection of all, since disassembling a 50 MB binary into assembler code typically isn't that useful.

    Even then there are no guarantees. If you're not comfortable with that then you need to carefully select your clients, live with the potential for breach of contract (and the enforcement it might compel you to pursue), or find another line of work.

  • But a problem arises even when you aren't afraid of the customer reselling what you have done out of the box, which can be tracked by lawyers. The problem can be that the customer is refactoring it. I mean, taking my many hours of work, changing a couple of things and calling it his... selling it a little cheaper and winning the business...

    That is why I am looking at technical solutions for protecting my work. It will also help me keep the invoicing from lawyers to a minimum, since having them protect my work costs a substantial amount of money.

  • How can I be sure he doesn't do that anyway?

    You can't prevent it...period. If anyone has the source there is no way to stop them...you can only then resort to punishing them if they do.

    Perhaps your contract, besides forbidding them from reselling it, has a price associated with reselling it, i.e. something like 10x or 20x what you would normally pay, plus any legal expenses required to get them to pay up. That way, if they choose to do it anyway, you have a nice piece of paper with their signature on it that already carries a nice fat pre-agreed price they need to pay should they go ahead and sell it.

  • Obfuscating the source is more trouble than it is worth, in my experience, unless you are trying to keep some complicated algorithm secret.

    I would suggest doing the following:

    1. Make sure you and your client and your lawyers all understand and agree with your contract.
    2. Insert a short copyright statement as a comment in every source file.
    3. Insert copyright notices into the generated web pages (via page templates or PHP code) as HTML comments, so a 'view source' will prove that your code is being used in an unlicensed application.

    If you're really worried, and this isn't an intranet-only app, you might expand on (3) and insert unique hidden text into the pages that is seen by Google but not by users.

    None of this will stop a determined thief, but will help deter and detect "accidental" thefts.

  • I reckon the only way to be sure is to offer your product as a hosted solution so the client never has access to the code. If you build it with this goal in mind you can still have resellers and let them skin the system so it looks like their own product.

    This works well where I work, in theory customers can licence the code to run on their own infrastructure, but it is priced at such a level that only big companies are prepared to pay, and big companies are on the whole more concerned with legal niceties so are less likely to just run off with your work.

    People are happy to go with hosted solutions if the price is right, and it can have benefits for everyone. The customer doesn't have to worry about getting everything set up, and they also have the security of knowing that if something does need tweaking we (the developers) are there to do it.

  • I haven't seen mention of Ioncube and so was wondering if there is a reason for not using it?

    Yes it costs money to set up and yes it requires a server side library to be installed (I daresay most hosts these days have it already running) but it does allow for domain restrictions as well as time based restrictions.

    Maybe you could even use it in conjunction with PHPAudit?

Programmatically load CSV file into Excel Worksheet (Delphi 7)

I have a large amount of data to insert into a worksheet of an existing Excel workbook. The Excel workbook will have other worksheets containing calculations and pivot tables. The data may have as many as 60,000 rows and more than 30 columns. This solution must work for both Excel 2003 and Excel 2007.

Using the Excel OLE object is way too slow, so we are attempting to load the data from a CSV file. We have come up with a method to load the data by placing it onto the clipboard and then pasting it into the worksheet. I feel this is quite a kludge. Is there another way to programmatically load a CSV file into a worksheet? Or perhaps a different solution altogether?


Update: We got slammed with another task before we could fully investigate the answers. We should be able to get back to this in a couple of weeks. I'll be sure to update again when we get back to this task.

Thanks for all of the answers to date!

From stackoverflow
  • Just a quick idea... Have you tried loading it from a Memory File?

  • You can load the CSV into a ListView, or use the OLEDB provider to load it into a DBGrid, then export it to XLS format using the TMxExport component from Max Components:

    Max Components

    mreith : Unfortunately this appears to overwrite the Excel workbook. We will have a pivot table as well as other formulas in an existing file.
  • Any chance you can drop the requirement for this to work with Office 2003? I would have recommended the Open XML Format SDK. It lets you bind managed code assemblies to spreadsheet documents that can handle events such as Open or Close, and read and write to cells in the document, among other things. Alternatively, you can use it to manipulate XLSX documents from an application. Quite slick, actually.

    Since this won't work for you, how about writing a macro that pulls in the CSV file when the spreadsheet is loaded?

    mreith : The requirements are for Office 2003 and up. This would have solved a number of other issues that we have as well. The macro might be a possibility; I'll have to see if we can run under reduced macro security.
  • Have you tried linking the CSV file directly into the worksheet?

    Go to Data -> Import External Data -> Import Data and change the file type to 'Text Files'.

    You can then refresh the worksheet when the CSV is updated.

    NOTE: I have not done this with the volume of data you have indicated, so YMMV

    mreith : This is something that we will try. I'm thinking that the CSV would be required to be present with the workbook. I'm not sure if this is something that we can do but it's worth looking into.
  • Actually there is a way that is quite fast: pretty old tech nowadays, but probably the fastest.

    It's ADO, or for earlier versions DAO (note: not ADO.NET).

    You can read a CSV file using ADO and the Jet engine to get the data into an ADO recordset; an Excel Range object then has a CopyFromRecordset method that will copy (very fast) from the ADO (or DAO) recordset.

    http://msdn.microsoft.com/en-us/library/aa165427(office.10).aspx

    mreith : This is something we will definitely check into.
  • XLSReadWrite is a component that can read and write excel files from Delphi. It's fast and it has support for Excel 2003 and 2007. You can create new excel files as well as open existing ones and add/modify them.

    Also you do not need to have Excel installed to be able to use it.

    See http://www.axolot.com/components/xlsrwii20.htm

    mreith : This is definitely something that we will look into. Thanks.
  • You can try to use Tab Separated Values instead of CSV - then you just paste this into Excel :)

    mreith : We are already doing something very similar now. It seems a bit of a kludge to paste text into Excel from the clipboard. If there are no other viable choices we may do this.
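    For what it's worth, the paste approach above amounts to a one-step conversion, since Excel automatically splits pasted tab-separated text into cells. A sketch of that conversion (Python here purely for illustration; the function name is made up):

```python
import csv
import io

def csv_to_tsv_text(csv_text):
    """Convert CSV text into tab-separated text suitable for placing
    on the clipboard; Excel splits it into cells on paste."""
    rows = csv.reader(io.StringIO(csv_text))
    out = io.StringIO()
    # \r\n line endings are what Windows clipboard consumers expect.
    csv.writer(out, delimiter="\t", lineterminator="\r\n").writerows(rows)
    return out.getvalue()
```

    Using the csv module (rather than naive string replacement) keeps quoted fields containing commas intact.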

Ajax call not responding on repeated request

I have a page with a dropdown. The onchange event calls a Javascript function (below) that includes an Ajax block that retrieves data and populates a TEXTAREA. On the surface, everything works.

I can select any item in the list with no problems. However, if I select an item that has previously been selected, the Ajax call appears to hang. It looks like maybe some weird caching issue or something. If I close the browser and reload the page, all items work again until I re-select.

I've tested for the readyState and status properties when it's hanging, but I get nothing. Am I missing something?

The page is a client project behind authentication so I can't post a URL, but here's the Ajax code. This is in a PHP page, but there's no PHP script related to this.

function getText( id ) {
var txt = document.getElementById( "MyText" );

txt.disabled = "disabled";
txt.innerText = "";
txt.className = "busy";

var oRequest = zXmlHttp.createRequest();
oRequest.open( "get", "get_text.php?id=" + id, true );
oRequest.send( null );

oRequest.onreadystatechange = function() {
 if( oRequest.readyState == 4 ) {
  if( oRequest.status == 200 ) {
   txt.innerText = oRequest.responseText;
  } else {
   txt.innerText = oRequest.status + ": " + oRequest.statusText; 
  }

  txt.disabled = "";
  txt.className = "";

  oRequest = null;
 }
}}

Edit: The code block seems a little quirky; it won't let me include the final } unless it's on the same line as the previous.

From stackoverflow
  • I would guess that you are running into a caching issue. I have noticed that Internet Explorer is more aggressive at caching ajax calls than Firefox is. One way to be sure of what is happening is to use Fiddler2. This application monitors your web traffic, and you would be able to see if the browser is making a request or not, and what cache headers are coming back on the responses that you do get.

    You can download fiddler2 from http://www.fiddlertool.com/fiddler/

  • You're setting the onreadystatechange function after you're sending the request. If it takes a long time (i.e. if it goes to the server), this will probably work, since there will be a delay before the callback is invoked.

    If the page is cached, though, the browser is probably trying to call onreadystatechange immediately in the send method. Move your assignment to onreadystatechange to before the open/send code.

    annakata : +1: send should be the very last thing you do with your request
    Michael Itzoe : That fixed it; thanks.
  • Hi, the caching is due to the same URL being called repeatedly. If you change the URL dynamically then this issue can be resolved: by adding a querystring with the current time (or any randomly generated number) to the request, you can change the URL without affecting the result.

    Steve Massing : This technique has worked for me many times. Add something like &zz=SomeRandomNumber to the end of your url so the url is unique each time you call it. Then ignore the zz variable in your handler code.
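    The cache-busting trick both comments describe can be sketched as a tiny helper (shown in Python for brevity; the equivalent JavaScript would append something like new Date().getTime() to the URL):

```python
import itertools
import time

_seq = itertools.count()

def cache_bust(url):
    """Append a throwaway 'zz' query parameter so every request URL is
    unique; the server-side handler simply ignores it. A sequence
    number is combined with the timestamp so two calls in the same
    second still differ."""
    token = "%d-%d" % (int(time.time()), next(_seq))
    sep = "&" if "?" in url else "?"
    return "%szz=%s" % (url + sep, token)
```

    The parameter name 'zz' is arbitrary; anything the server ignores will do.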

Converting XML document for use with .NET 2.0 TreeView control

I have an XML file which is in the following format:

<root>
  <category>
    <doctype>
      <name>Doc1</name>
      <site>
        <name>Site1</name>
        <target>iframe</target>
        <url>http://www.gmail.com</url>
      </site>
    </doctype>
    <doctype>
      <name>Doc2</name>
      <site>
        <name>Site2</name>
        <target>iframe</target>
        <url>http://www.bbc.co.uk</url>
      </site>
    </doctype>
  </category>
</root>

I need to use it on a standard .net 2.0 TreeView control which requires the XML in the following format

<root>
  <category>  
    <doctype name="Doc1">
      <site name = "Site1" target = "iframe" url = "http://www.gmail.com">
      </site>
    </doctype>
    <doctype name="Doc2">
      <site name = "Site2" target = "iframe" url = "http://www.bbc.co.uk">
      </site>
    </doctype>
  </category>
</root>

The biggest complication is the fact that some child nodes of the DOCTYPE node need to be converted to attributes (i.e. NAME) while some stay as child nodes which require attributes of their own (i.e. SITE).

How can this be done using XSLT?

From stackoverflow
  • The following XSLT 1.0 transformation does what you intend.

    <xsl:stylesheet 
      version="1.0" 
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    
      <xsl:template match="root | category | doctype | site">
        <xsl:copy>
           <xsl:apply-templates select="*" />
        </xsl:copy>
      </xsl:template>
    
      <xsl:template match="name | target | url">
        <xsl:attribute name="{local-name()}">
          <xsl:value-of select="." />
        </xsl:attribute>
      </xsl:template>
    
    </xsl:stylesheet>
    

    Output:

    <root>
      <category>
        <doctype name="Doc1">
          <site name="Site1" target="iframe" url="http://www.gmail.com"></site>
        </doctype>
        <doctype name="Doc2">
          <site name="Site2" target="iframe" url="http://www.bbc.co.uk"></site>
        </doctype>
      </category>
    </root>
    
    eMTeeN : would appreciate the simpler solution for the modified question. thanks
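    For comparison, the same child-elements-to-attributes conversion can be done procedurally. A sketch using Python's standard-library ElementTree (illustrative only; the XSLT above answers the question as asked, and the function name here is made up):

```python
import xml.etree.ElementTree as ET

# Child elements to fold into attributes on their parent.
ATTR_ELEMS = {"name", "target", "url"}

def fold_children_to_attrs(elem):
    """Recursively turn <name>/<target>/<url> child elements into
    attributes on their parent, leaving other children in place."""
    for child in list(elem):
        if child.tag in ATTR_ELEMS:
            elem.set(child.tag, child.text or "")
            elem.remove(child)
        else:
            fold_children_to_attrs(child)
    return elem
```

    Walking a copy of the child list (list(elem)) is what makes removing children during iteration safe.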

Perl, LibXML and Schemas

I have an example Perl script in which I am trying to load and validate a file against a schema, then interrogate various nodes.

#!/usr/bin/env perl
use strict;
use warnings;
use XML::LibXML;

my $filename = 'source.xml';
my $xml_schema = XML::LibXML::Schema->new(location=>'library.xsd');
my $parser = XML::LibXML->new ();
my $doc = $parser->parse_file ($filename);

eval {
    $xml_schema->validate ($doc);
};

if ($@) {
    print "File failed validation: $@" if $@;
}

eval {
    print "Here\n";
    foreach my $book ($doc->findnodes('/library/book')) {
     my $title = $book->findnodes('./title');
     print $title->to_literal(), "\n";

    }
};

if ($@) {
    print "Problem parsing data : $@\n";
}

Unfortunately, although it is validating the XML file fine, it is not finding any $book items and therefore not printing out anything.

If I remove the schema from the XML file and the validation from the PL file then it works fine.

I am using the default namespace. If I change it to not use the default namespace (xmlns:lib="http://libs.domain.com"), prefix all items in the XML file with lib, and change the XPath expressions to include the namespace prefix (/lib:library/lib:book), then it again works fine.

Why? and what am I missing?

XML:

<?xml version="1.0" encoding="utf-8"?>
<library xmlns="http://lib.domain.com" 
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="http://lib.domain.com .\library.xsd">
    <book>
     <title>Perl Best Practices</title>
     <author>Damian Conway</author>
     <isbn>0596001738</isbn>
     <pages>542</pages>
     <image src="http://www.oreilly.com/catalog/covers/perlbp.s.gif" width="145" height="190"/>
    </book>
    <book>
     <title>Perl Cookbook, Second Edition</title>
     <author>Tom Christiansen</author>
     <author>Nathan Torkington</author>
     <isbn>0596003137</isbn>
     <pages>964</pages>
     <image src="http://www.oreilly.com/catalog/covers/perlckbk2.s.gif" width="145" height="190"/>
    </book>
    <book>
     <title>Guitar for Dummies</title>
     <author>Mark Phillips</author>
     <author>John Chappell</author>
     <isbn>076455106X</isbn>
     <pages>392</pages>
     <image src="http://media.wiley.com/product_data/coverImage/6X/07645510/076455106X.jpg" width="100" height="125"/>
    </book>
</library>

XSD:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns="http://lib.domain.com" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" targetNamespace="http://lib.domain.com">
    <xs:attributeGroup name="imagegroup">
     <xs:attribute name="src" type="xs:string"/>
     <xs:attribute name="width" type="xs:integer"/>
     <xs:attribute name="height" type="xs:integer"/>
    </xs:attributeGroup>
    <xs:element name="library">
     <xs:complexType>
      <xs:sequence>
       <xs:element maxOccurs="unbounded" name="book">
        <xs:complexType>
         <xs:sequence>
          <xs:element name="title" type="xs:string"/>
          <xs:element maxOccurs="unbounded" name="author" type="xs:string"/>
          <xs:element name="isbn" type="xs:string"/>
          <xs:element name="pages" type="xs:integer"/>
          <xs:element name="image">
           <xs:complexType>
            <xs:attributeGroup ref="imagegroup"/>
           </xs:complexType>
          </xs:element>
         </xs:sequence>
        </xs:complexType>
       </xs:element>
      </xs:sequence>
     </xs:complexType>
    </xs:element>
</xs:schema>
From stackoverflow
  • From the XML::LibXML docs:

    A common mistake about XPath is to assume that node tests consisting of an element name with no prefix match elements in the default namespace. This assumption is wrong - by XPath specification, such node tests can only match elements that are in no (i.e. null) namespace. ...(and later)... ...The recommended way is to use the XML::LibXML::XPathContext module

    So, from the perspective of XPath, there is no "default" namespace...for any non-null namespace, you have to specify it in your XPath. The XML::LibXML::XPathContext module lets you create a prefix for any namespace to use in your XPath expression.
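    The same rule is easy to demonstrate outside Perl. A small sketch with Python's standard-library ElementTree (illustrative; the ns prefix mapping here plays the role that XML::LibXML::XPathContext plays in Perl):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<library xmlns="http://lib.domain.com">'
    '<book><title>Perl Best Practices</title></book>'
    '</library>')

# A prefixless node test matches only elements in *no* namespace,
# so this finds nothing even though <book> looks unprefixed:
assert doc.findall('book') == []

# Binding a prefix of our choosing to the document's default
# namespace makes the same query work:
ns = {'lib': 'http://lib.domain.com'}
titles = [t.text for t in doc.findall('lib:book/lib:title', ns)]
assert titles == ['Perl Best Practices']
```

    The prefix name is local to the query; it does not need to appear anywhere in the document itself.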

Identifying whether a certificate is der encoded or base 64 encoded

Hi, is there a way by which I can know the encoding type of a certificate file?

From stackoverflow
  • If you open the certificate up in Notepad and it starts with -----BEGIN CERTIFICATE----- then it's Base64 encoded. If it's all gobbledygook, it's DER encoded.
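    That eyeball check can be mechanized. A sketch (Python; `cert_encoding` is a made-up name, and the DER heuristic relies on the fact that a DER certificate is an ASN.1 SEQUENCE, which begins with byte 0x30):

```python
def cert_encoding(data):
    """Guess whether certificate bytes are PEM (Base64) or DER."""
    # PEM files are ASCII and carry a BEGIN header near the top.
    if b"-----BEGIN" in data[:1024]:
        return "PEM"
    # DER certificates start with an ASN.1 SEQUENCE tag (0x30).
    if data[:1] == b"\x30":
        return "DER"
    return "unknown"
```

    Reading the file in binary mode ('rb') avoids decoding errors on DER input.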

How do you set a DateTimePicker to be read only?

I have a DateTimePicker (nullable version) that I need to be read only. I'm not happy with the display if it is disabled, so wanted to know if anyone had a nifty example of how to stop updates on the field?

Thanks.

From stackoverflow
  • Handle the DateTimePicker.MouseClick event and set e.Handled to true.

    Rowland Shaw : You'd also need to disable keyboard input that would change the value
  • You could hook the Changed event, and set the value back to your desired value (if different) -- this way you'll cover any cause for change (via mouse, or keyboard)

    Have you considered using a different control, such as a read only textbox or even a label control?

  • "I'm not happy with the display if it is disabled"

    Why? If it's because a disabled text box looks weird, you can just change the disabled style to make it look normal, or indicate in a prettier way that it accepts input only through the date picker. Possibly have no borders on it, to say it's not really a text box.

  • How about just picking up the Changed event and setting e.Cancel = true?

How to select or highlight a block in emacs?

Hello all,

I want to select or highlight a block in emacs without mouse but from the keyboard like vim's visual mode. What is the easiest way to do this from a keyboard?

From stackoverflow
  • Take a look at region-rectangle in emacs.

    In short, you start selection like usual with C-spc, then kill region with C-x r k and paste/yank killed block with C-x r y.

    Allen : That doesn't kill regions, it kills rectangles, which is a lot different from what vim's visual mode does, in my experience.
  • Use C-Space to set a mark and move your cursor. The transient-mark-mode will highlight selections for you; M-x transient-mark-mode. You can setup emacs to enable this mode by default using a customization; M-x customize-option RET transient-mark-mode.

  • See the article: "Working with rectangular selections", especially the comments section. See also the section of CUA mode documentation titled "CUA rectangle support". There's also a nice video on vimeo.

  • If I understand the question correctly, it is not about rectangular regions originally.

    C-Spc puts a mark at the current position.

    Wherever your cursor is afterwards, the text between the last mark and the current position is "selected" (you can highlight this by activating transient-mark-mode, but this also means that marks have to be deactivated when you don't want the highlight).

    You can operate on that region with commands like:

    C-w . . Kill region. This deletes and puts the region into the kill ring.
    C-y . . Yank. This inserts the last snippet from the kill ring.
    M-y . . Cycle kill ring. Immediately after C-y, this replaces the yanked part by the other snippets in the kill ring.
    M-w . . Save region into kill ring. Like C-w, but doesn't delete.

    This is just the basic usage. Marks have other uses, too. I recommend the tutorial (C-h t).

error while truncating tables from oracle db

I am doing something like this in a procedure to clear all data from all tables in my database.

LOOP
    dbms_utility.exec_ddl_statement('alter table ' || c.owner || '.' || c.table_name || ' disable constraint ' || c.constraint_name);
  END LOOP;
 .
 .
 .

LOOP
    EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || t.table_name ;
  END LOOP;

Now , this throws the following error :

ORA-03291: Invalid truncate option - missing STORAGE keyword
ORA-06512: at "MYSCHEMA.CLEAR_DATA", line 15
ORA-06512: at line 2
Process exited.
Disconnecting from the database MYDB.
  1. Why is a storage keyword mandatory? I thought DROP STORAGE was the default.
  2. Even specifying the storage clause, as in,

    EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || t.table_name || 'DROP STORAGE';

    doesn't help. The error is the same.

  3. I thought it might have something to do with foreign constraints on some tables. Hence, the 'disable constraint' earlier in the script
From stackoverflow
  • I would suggest that you build the command you are executing in a string variable, output it using dbms_output, then execute it. This way you will see exactly what it is trying to execute that is generating the error.

    One thing that could be causing this error is if you have a table name with a space in it (yes, it's possible). The solution if that is the case is to wrap the table name in double quotes.

    dev> create table "dave exp" (x number);
    
    Table created.
    
    dev> truncate table dave exp;
    truncate table dave exp
                        *
    ERROR at line 1:
    ORA-03291: Invalid truncate option - missing STORAGE keyword
    
    dev> truncate table "dave exp";
    
    Table truncated.
    
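    The quoting fix above generalizes. A sketch of a helper that builds the statement with identifiers safely double-quoted (Python for illustration; `truncate_stmt` is a made-up name, and note that quoted identifiers are case-sensitive in Oracle, which is fine when the names come straight from the data dictionary):

```python
def truncate_stmt(owner, table):
    """Build a TRUNCATE statement with owner and table name wrapped
    in double quotes, so names containing spaces or unusual case
    survive. Embedded double quotes are doubled, per SQL rules."""
    def q(ident):
        return '"%s"' % ident.replace('"', '""')
    return 'TRUNCATE TABLE %s.%s DROP STORAGE' % (q(owner), q(table))
```

    Quoting every identifier unconditionally is simpler and safer than trying to detect which names need it.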
  • Change your program:

    1. put your truncate command in a PL/SQL variable prior to execution
    2. add an exception handler that outputs the truncate statements via dbms_output or utl_file (fflush after each one) when you encounter an exception:
    LOOP 
      BEGIN
          ...
        v_sql := 'TRUNCATE TABLE ' || t.table_name ;
        EXECUTE IMMEDIATE v_sql;
      EXCEPTION
        WHEN OTHERS THEN
           dbms_output.put_line(SQLERRM);
           dbms_output.put_line(v_sql);
      END;
    END LOOP;
    

    This should show you the statement causing the issue.