Sunday, May 1, 2011

Aggregate multiple arrays into one array (Linq)

I'm having trouble aggregating multiple arrays into one "big array". I think this should be possible in LINQ, but I can't get my head around it :(

Consider some method which returns an array of DummyObjects:

public class DummyObjectReceiver 
{
  public DummyObject[] GetDummyObjects() { -snip- }
}

Now somewhere I have this:

public class Temp
{
  public List<DummyObjectReceiver> Receivers { get; set; }

  public DummyObject[] GetAllDummyObjects ()
  {
    //here's where I'm struggling (in LINQ) - no problem doing it with foreach loops... ;)
  }
}

Hope it's somewhat clear what I'm trying to achieve. (As an extra, I want to order this array by an int value the DummyObject has... but the orderby should be no problem, I hope ;)

From stackoverflow
  • You use the SelectMany method to flatten the list of array-returning objects into a single array.

    public class DummyObject {
     public string Name;
     public int Value;
    }
    
    public class DummyObjectReceiver  {
    
     public DummyObject[] GetDummyObjects()  {
      return new DummyObject[] {
       new DummyObject() { Name = "a", Value = 1 },
       new DummyObject() { Name = "b", Value = 2 }
      };
     }
    
    }
    
    public class Temp {
    
     public List<DummyObjectReceiver> Receivers { get; set; }
    
     public DummyObject[] GetAllDummyObjects() {
      return Receivers.SelectMany(r => r.GetDummyObjects()).OrderBy(d => d.Value).ToArray();
     }
    
    }
    

    Example:

    Temp temp = new Temp();
    temp.Receivers = new List<DummyObjectReceiver>();
    temp.Receivers.Add(new DummyObjectReceiver());
    temp.Receivers.Add(new DummyObjectReceiver());
    temp.Receivers.Add(new DummyObjectReceiver());
    
    DummyObject[] result = temp.GetAllDummyObjects();
    
    AnthonyWJones : +1. I missed the "multiple" aspect in my now deleted answer.
    Calamitous : exactly what I was looking for :) extra thanks for including orderby! (still can only +1)

Hibernate criteria: how to use criteria to return only one element of an object instead of the entire object

Hello,

I'm trying to get only the list of ids of object bob, for example, instead of the list of bob objects. It's OK with an HQL query, but I'd like to know if it's possible using Criteria?

An example :

final StringBuilder hql = new StringBuilder();
hql.append( "select bob.id from " )
    .append( bob.class.getName() ).append( " bob " )
    .append( "where bob.id > 10");

final Query query = session.createQuery( hql.toString() );
return query.list();
From stackoverflow

Loading an existing database into WWW SQL Designer?

I've used WWW SQL Designer several times to design databases for applications. I'm now in charge of working on an application with a lot of tables (100+ mysql tables) and I would love to be able to look at the relations between tables in a manner similar to what WWW SQL Designer provides. It seems that it comes with the provisions to hook up to a database and provide a diagram of its structure, but I've not yet been able to figure out exactly how one would do that.

From stackoverflow
  • Can you just export the sql query that builds your existing tables, and run that in WWW SQL Designer? Most database management software has that option...

  • Looking at the interface of the designer, I guess that when you run it on your own PHP/MySQL server, you should be able to import an existing database with the "Import from DB" button in the Save/Load dialog.

  • You could use Visio to import the database; it will diagram it for you.

  • btw, have you tried SchemaBank? They are web-based and support MySQL fairly well. It eats your sql dump and generates the tables and relationships for you.

  • http://code.google.com/p/database-diagram/

    This takes a SQL structure (SQL dump) and shows a diagram :)

    Paul Wicks : Very cool. Now it just needs to work with a few more sql types and do a better job of arranging the diagram

Crystal Reports - inconsistent formatting

We have a c# windows service generating reports from Crystal 11 RPT files.

This morning the service was restarted as normal, generated a couple of reports correctly then seems to have changed the line spacing in the headers of a table in one particular report, so the headers didn't fit correctly. The width of the text also changed, and some words wrapped where they would not normally wrap.

Some 20 reports were generated incorrectly then, roughly half an hour later, the reports went back to looking like normal.

Other RPT files were not affected.

The problem has not happened on previous days, so is not simply connected to the time.

Some of the reports had no rows in the table which was screwed up, so it's not simply a matter of data not fitting in the table either.

Can anyone help suggest an explanation for this, or is it just the kind of madness one expects from a product as hopeless as Crystal?

From stackoverflow
  • Did you change your default printer or other printer settings for the printer being used by the report? If the printer that is selected in the report for printing is not found, the report will print to the default printer. This may cause the page settings to change based on the sizes and fonts supported by the printer.

    LordSauce : Hi Huzefa - thanks for your reply - nothing was changed on the server generating reports.

ASP.NET MVC - jQuery Sortable

I have a list of menu items which can be sorted. I have the sort working, based on this link.

However, I'm not sure how to save the order of the menu items to the database? I'm using nhibernate.

View Code

<h3>Sort Main Menus</h3>
<% using(Html.BeginForm()) { %>
    <p>You can drag the items into a different order</p>
    <p></p>
    <div id="items">
        <% foreach (var mainMenusList in ViewData.Model) 
           {%>
             <%Html.RenderPartial("MainMenuEditor", mainMenusList, new ViewDataDictionary(ViewData) { { "mainMenuName", "mainMenu" } });%>     
           <%} 
        %>
    </div>
    <input type="submit" value="Save changes" />
 <% } %>

 <script type="text/javascript">
    $(function() 
    {
        $("#items").sortable({ axis: "y" });
    });
</script>

MainMenuEditor Code

<div>
 <input type="hidden" name="<%= ViewData["mainMenuName"] + ".index" %>" value="<%= ViewData.Model.Id %>" />
 <% var fieldPrefix = string.Format("{0}[{1}].", ViewData["mainMenuName"], ViewData.Model.Id); %>
 <%= Html.Hidden(fieldPrefix + "MainMenuID", ViewData.Model.Id) %>
 <%= Html.TextBox(fieldPrefix + "Name", ViewData.Model.MainMenuName, new { size = "30"})%></div>
From stackoverflow
  • I think you need a <form> tag, and to submit that form to your controller. The controller needs to pass the data to the model, and the model will make sure the data is saved in the database.
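
    For example, here is a rough sketch of the controller side. The MainMenu entity, its DisplayOrder property, the injected NHibernate ISession and the posted menuIds array are assumptions for illustration, not code from the question:

    using System.Web.Mvc;
    using NHibernate;

    public class MainMenuController : Controller
    {
        private readonly ISession session; // NHibernate session, injected elsewhere

        public MainMenuController(ISession session)
        {
            this.session = session;
        }

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult SaveOrder(int[] menuIds) // ids posted in their new order
        {
            using (ITransaction tx = session.BeginTransaction())
            {
                for (int i = 0; i < menuIds.Length; i++)
                {
                    MainMenu menu = session.Get<MainMenu>(menuIds[i]);
                    menu.DisplayOrder = i; // change tracking persists this on commit
                }
                tx.Commit();
            }
            return RedirectToAction("Index");
        }
    }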

    Roslyn : Thanks. That's it working now.

Emacs ESS Mode - Tabbing for Comment Region

I am using the Emacs-Speaks-Statistics (ESS) mode for Emacs. When editing R code, any comment lines (those starting with #) automatically get tabbed to the far right when I create a new line above it. How should I change my .emacs.el file to fix this?

For example, I have:

# Comment

Now, after putting my cursor at the beginning of the line and pressing Enter, I get:

                                # Comment

Thanks for any hints.

From stackoverflow
  • Either

    (setq ess-fancy-comments nil)
    

    if you never want to indent single-# comments, or

    (add-hook 'ess-mode-hook 
              (lambda () 
                (local-set-key (kbd "RET") 'newline)))
    

    if you want to change the behavior of Enter so it doesn't indent.

    aL3xa : This is just sublime! Thanks!!!
    Martin Mächler : Rather I think you should use "#" for end-of-line comments, and these are nicely indented to the same column on purpose --> nice code "listing". For the other comments, really do get in to the habit of using "##" (much more than "###"): These indent as other "statements" within that block of code
  • Use '###' if you don't want the comments indented. According to the manual,

    By default, comments beginning with ‘###’ are aligned to the beginning of the line. Comments beginning with ‘##’ are aligned to the current level of indentation for the block containing the comment. Finally, comments beginning with ‘#’ are aligned to a column on the right (the 40th column by default, but this value is controlled by the variable comment-column,) or just after the expression on the line containing the comment if it extends beyond the indentation column.

Escaping HTML entities in the URL of a rails remote_function

Some content in my page is loaded dynamically with the use of this code :

javascript_tag( remote_function( :update => agenda_dom_id, :url => agenda_items_url(options), :method => :get ) )

When it outputs in the browser, it comes out as this :

new Ajax.Updater('agenda', 'http://localhost:3000/agenda_items?company=43841&amp;history=true', {asynchronous:true, evalScripts:true, method:'get'})

The & character in the URL is replaced by &amp; and so the second parameter of the request is discarded.

I made different tests, and it looks as if Rails does the HTML entity conversion as soon as it detects that the code is in a script tag. Trying to hardcode the link or the javascript tag didn't change anything.

Anybody encountered this problem before?

From stackoverflow
  • All javascript characters are escaped (see the source of remote_function). That has some consequences. However, in your case I don't see any problem; I have similar cases where this just works.

    Can you describe the problem you have with it?

    PS. I have posted a Lighthouse ticket because I have a case where I need to insert javascript: https://rails.lighthouseapp.com/projects/8994/tickets/2500-remote_function-does-not-allow-dynamically-generation-of-url#ticket-2500-2

  • The problem is with the URL that gets generated :

    http://localhost:3000/agenda_items?company=43841&amp;history=true
    

    The history parameter won't get sent correctly since the & character is replaced by a &amp;.

    The funny thing is that when I try it with a link_to_remote instead of the remote_function, or when I output the remote_function directly on the page (and not in a script tag), it works as expected and doesn't escape the & character with its HTML entity.

    I'm on Rails 2.1.1 and Firefox. Maybe it has been fixed in the latest version of Rails but switching is not an option right now.

  • I'm on Rails 2.3.2 and I don't have any problem when & is replaced by &amp;

    If you need to fix this in your situation, you could patch the remote_function method and add an :escape_url option, setting it to false. Put the code below somewhere in your Rails environment where it gets loaded.

    module ActionView
    class Base
     def remote_function(options)
      javascript_options = options_for_ajax(options)
    
      update = ''
      if options[:update] && options[:update].is_a?(Hash)
        update  = []
        update << "success:'#{options[:update][:success]}'" if options[:update][:success]
        update << "failure:'#{options[:update][:failure]}'" if options[:update][:failure]
        update  = '{' + update.join(',') + '}'
      elsif options[:update]
        update << "'#{options[:update]}'"
      end
    
      function = update.empty? ?
        "new Ajax.Request(" :
        "new Ajax.Updater(#{update}, "
    
      url_options = options[:url]
      url_options = url_options.merge(:escape => false) if url_options.is_a?(Hash)
      function << (options[:escape_url] == false ? "'#{url_for(url_options)}'" : "'#{escape_javascript(url_for(url_options))}'") ## Add this line to the rails core
      function << ", #{javascript_options})"
    
      function = "#{options[:before]}; #{function}" if options[:before]
      function = "#{function}; #{options[:after]}"  if options[:after]
      function = "if (#{options[:condition]}) { #{function}; }" if options[:condition]
      function = "if (confirm('#{escape_javascript(options[:confirm])}')) { #{function}; }" if options[:confirm]
    
      return function
     end
    end
    end

  • Use the :with option (which must be a valid query string and is not escaped like the :url), like so

    url, query_string = agenda_items_url(options).split('?')
    javascript_tag( remote_function( :update => agenda_dom_id, :url => url, :with => query_string, :method => :get ) )
    

    I'm assuming agenda_items_url is your own helper function and it is outputting the full url without escaping it first.

  • Guys, it's just the default behavior of url_for in views... C'mon, pass :escape => false along with your URL params and enjoy unescaped stuff :)

Writing into excel file with OLEDB

Does anyone know how to write to an excel file (.xls) via OLEDB in C#? I'm doing the following:

   OleDbCommand dbCmd = new OleDbCommand("CREATE TABLE [test$] (...)", connection);
   dbCmd.CommandTimeout = mTimeout;
   results = dbCmd.ExecuteNonQuery();

But I get an OleDbException thrown with message:

"Cannot modify the design of table 'test$'. It is in a read-only database."

My connection seems fine and I can select data fine, but I can't seem to insert data into the Excel file. Does anyone know how I get read/write access to the Excel file via OLEDB?

From stackoverflow
  • A couple questions:

    • Does the user that executes your app (you?) have permission to write to the file?
    • Is the file read-only?
    • What is your connection string?

    If you're using ASP, you'll need to add the IUSR_* user as in this example.

    • How do I check the permissions for writing to an excel file for my application (I'm using excel 2007)?
    • The file is not read only, or protected (to my knowledge).
    • My connection String is:

    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=fifa_ng_db.xls;Mode=ReadWrite;Extended Properties=\"Excel 8.0;HDR=Yes;IMEX=1\""

    Michael Haren : Check the permissions by right-clicking on it and hitting the security or permissions tab. Is the Excel file closed when you connect to it?
    rohancragg : You should try IMEX=0 instead
  • You need to add "ReadOnly=False;" to your connection string

    Provider=Microsoft.Jet.OLEDB.4.0;Data Source=fifa_ng_db.xls;Mode=ReadWrite;ReadOnly=false;Extended Properties=\"Excel 8.0;HDR=Yes;IMEX=1\";
    
    rohancragg : Not quite correct - using the ReadOnly attribute causes the error "System.Data.OleDb.OleDbException : Could not find installable ISAM." It is the IMEX=0 that prevents the file being readonly. The string that worked for me (C#) is: @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Mode=ReadWrite;Extended Properties=""Excel 8.0;HDR=YES;MaxScanRows=0;IMEX=0"";";
  • Further to Michael Haren's answer. The account you will need to grant Modify permissions to the XLS file will likely be NETWORK SERVICE if this code is running in an ASP.NET application (it's specified in the IIS Application Pool). To find out exactly what account your code is running as, you can do a simple:

    Response.Write(Environment.UserDomainName + "\\" + Environment.UserName);
    
  • I was also looking for an answer, but Zorantula's solution didn't work for me. I found the solution on http://www.cnblogs.com/zwwon/archive/2009/01/09/1372262.html

    I removed the "ReadOnly=false" parameter and the "IMEX=1" extended property.

    The IMEX=1 property opens the workbook in import mode, so structure-modifying commands (like CREATE TABLE or DROP TABLE) don't work.

    My working connection string is:

    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=workbook.xls;Mode=ReadWrite;Extended Properties=\"Excel 8.0;HDR=Yes;\";"
    
  • Try these links:

    http://csharp.net-informations.com/excel/csharp-excel-oledb.htm
    http://csharp.net-informations.com/excel/csharp-excel-oledb-insert.htm
    

    bolton.

  • I was running under ASP.NET, and encountered both "Cannot modify the design..." and "Cannot locate ISAM..." error messages.

    I found that I needed to:

    a) use the following connection string:

    "Provider=Microsoft.Jet.OLEDB.4.0;Mode=ReadWrite;Extended Properties='Excel 8.0;HDR=Yes;';Data Source=" + {path to file};

    Note I too had issues with IMEX=1 and with the ReadOnly=false attributes in the connection string.

    b) grant EVERYONE full permissions to the folder in which the file was being written. Normally, ASP.NET runs under the NETWORK SERVICE account, and that already had permissions. However, the OleDb code is unmanaged, so it must run under some other security context. (I am currently too lazy to figure out which account, so I just used EVERYONE.)

Differences between 'Add web site/solution to source control...'

I have opened a website hosted on my workstation in Visual Studio 2008 and saved it as a solution. I now want to add this to source control, and I am being given the option to either 'Add solution to source control...' or 'Add web site to source control...'.

This solution needs to be accessed, worked on and run locally by several other developers so I was wondering what the key differences are between each option and which would be the best to choose?

From stackoverflow
  • A solution is a project aggregator.

    • If you have several projects in your solution and you want your colleagues to work with them all the same way you do, then you must add the solution.

    • If you have several projects in your solution and you do NOT want your colleagues to work with them all the same way you do, then you must NOT add the solution.

    • If you only have one project it is really the same, but I would add the solution so that if at a later time someone adds a project to it, I would have immediate access to it.

Refresh Page in WPF

  1. How do I close or cancel the popup window on a button click?

  2. When I update the data in the popup window, how do I refresh the datagrid in the main page at the same time?

Thank you

From stackoverflow
    1. The IsOpen property of the popup should be set to false.
    2. Use Binding.
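
    In code, that might look like the sketch below; the popup name myPopup, the item type and the Items collection are assumptions. A datagrid whose ItemsSource is bound to an ObservableCollection picks up changes made from the popup automatically:

    using System.Collections.ObjectModel;
    using System.Windows;
    using System.Windows.Controls;

    public partial class MainPage : Page
    {
        // Bind the main page's datagrid ItemsSource to this property; the
        // collection raises change notifications, so edits made in the popup
        // appear without a manual refresh.
        private readonly ObservableCollection<string> items = new ObservableCollection<string>();
        public ObservableCollection<string> Items { get { return items; } }

        private void CloseButton_Click(object sender, RoutedEventArgs e)
        {
            myPopup.IsOpen = false; // closes the Popup element named "myPopup" in XAML
        }
    }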

ASP.NET Webpart Static Connections.

I have two webparts in two different webpart zones. They provide a Master/Details scenario using gridviews. They are defined using static connections. Initially this works great.

As soon as I close one of the webparts I get the message "You are about to close the webpart. It is currently providing data to other webparts, and these connections will be deleted if this webpart is closed. Click OK to continue."

This in itself is fine, so I click close and my part closes. However, when I open the catalog zone and re-add the webpart (which gets added fine), the connection between the parts is broken (as described by the message).

However, my webpart connection in my HTML is still visible. I can only assume it uses the ASP.NET membership database or similar to remember the ID of the connection and not to enable it.

My question is: how do I re-enable the connection in code or otherwise?

Thanks.

From stackoverflow
  • OK, I have solved my issue. I added the following into WebPartManager.WebPartAdded():

        '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
        Dim wp1 As WebPart = zoneDiaryTopLeft.WebParts("Ucl_Diary_Summary1")
        Dim wp2 As WebPart = zoneDiaryTopRight.WebParts("Ucl_DiaryAwaitingReview1")
    
        Dim providerConnectionPoint As ProviderConnectionPoint = _
        WebPartManager1.GetProviderConnectionPoints(wp1)("IMessageProvider")
    
        Dim consumerConnectionPoint As ConsumerConnectionPoint = _
        WebPartManager1.GetConsumerConnectionPoints(wp2)("IMessageConsumer")
    
        Dim returnValue As WebPartConnection
        returnValue = WebPartManager1.ConnectWebParts(wp1, providerConnectionPoint, wp2, consumerConnectionPoint)
        '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
    

    All said, how does it know/store the connection that was removed and remember NOT to allow it to be active? It would be much easier if I could stop the connection being removed, or re-enable it. I know dynamic connections are an option, but I don't want users having this ability, as they have a hard enough job understanding the fact that you can drag a webpart around the screen. Connections are rocket science to them.

  • Master!

    Congratulations man, this is really valuable information and an easy-to-understand guide.

    Really thanks.

Are middleware apps required to do business logic?

Let's suppose I have a large middleware infrastructure mediating requests between several business components (customer applications, network, payments, etc). The middleware stack is responsible for orchestration, routing, transformation and other stuff (similar to the Enterprise Integration Patterns book by Gregor Hohpe).

My question is: is it good design to put some business logic on the middleware?

Let's say my app A requests some customer data from the middleware. But in order to get this data, I have to supply a customer id and some other parameter. Should the fetching of this parameter be done by the requesting app, or is the middleware responsible for 'facilitating' and providing an interface that receives customer ids and internally fetches the other parameter?

I realize this is not a simple question (because of the definition of business logic), but I was wondering if there is a general approach or some guidelines.

From stackoverflow
  • The middleware application should do it. System A should have no idea that the other parameter exists, and will certainly have no idea about how to get it.

  • This is the "Composite Application" pattern; the heart of a Service Oriented Architecture. That's what the ESB vendors are selling: a way to put additional business logic somewhere that creates a composite application out of existing applications.

    This is not simple because your composite application is not just routing. It's a proper new composite transaction layered on top of the routing.

    Hint. Look at getting a good ESB before going too much further. This rapidly gets out of control and having some additional support is helpful. Even if you don't buy something like Sun's JCAPS or Open ESB, you'll be happy you learned what it does and how they organize complex composite applications.

    Miguel Ping : The difficult part is accountability: who should be responsible for what. I believe this is essentially a cultural problem. BTW, my question was purely hypothetical; the project where I am is already almost fully finished, and we're using a nice middleware product.
    S.Lott : The difficult part is called "governance" and includes accountability as well as change control and technical standards. It's not "cultural" -- it's bigger than that -- and it's usually very hard.
  • Orchestration, Routing and Transformation.

    You don't do any of these for technical reasons, at random, or just for fun; you do them because you have some business requirement -- ergo there is business logic involved.

    The only thing you are missing for a complete business system is calculation and reporting (let us assume you already have security in place!).

    Except for very low-level networking, OS and storage issues, almost everything that comprises a computer system is there because the business/government/end users want it to be there.

    The choice of 'Business Logic' as terminology was very poor and has led to endless distortions of design and architecture.

    What most good designers/architects mean by business logic is calculation and analysis.

    If you "%s/Business Logic/Calculation/g" most of the architectural edicts make more sense.

Performance Tuning PostgreSQL

Keep in mind that I am a rookie in the world of sql/databases.

I am inserting/updating thousands of objects every second. Those objects are actively being queried for at multiple second intervals.

What are some basic things I should do to performance tune my (postgres) database?

From stackoverflow
  • The absolute minimum I'll recommend is the EXPLAIN ANALYZE command. It will show a breakdown of subqueries, joins, et al., all the time showing the actual amount of time consumed in the operation. It will also alert you to sequential scans and other nasty trouble.

    It is the best way to start.

  • First and foremost, read the official manual's Performance Tips.

    Running EXPLAIN on all your queries and understanding its output will let you know if your queries are as fast as they could be, and if you should be adding indexes.

    Once you've done that, I'd suggest reading over the Server Configuration part of the manual. There are many options which can be fine-tuned to further enhance performance. Make sure to understand the options you're setting though, since they could just as easily hinder performance if they're set incorrectly.

    Remember that every time you change a query or an option, test and benchmark so that you know the effects of each change.

  • It's a broad topic, so here's lots of stuff for you to read up on.

    • EXPLAIN and EXPLAIN ANALYZE is extremely useful for understanding what's going on in your db-engine
    • Make sure relevant columns are indexed
    • Make sure irrelevant columns are not indexed (insert/update-performance can go down the drain if too many indexes must be updated)
    • Make sure your postgres.conf is tuned properly
    • Know what work_mem is, and how it affects your queries (mostly useful for larger queries)
    • Make sure your database is properly normalized
    • VACUUM for clearing out old data
    • ANALYZE for updating statistics (statistics target for amount of statistics)
    • Persistent connections (you could use a connection manager like pgpool or pgbouncer)
    • Understand how queries are constructed (joins, sub-selects, cursors)
    • Caching of data (i.e. memcached) is an option

    And when you've exhausted those options: add more memory, faster disk-subsystem etc. Hardware matters, especially on larger datasets.

    And of course, read all the other threads on postgres/databases. :)

  • http://wiki.postgresql.org/wiki/Performance_Optimization

  • Put fsync = off in your postgresql.conf if you trust your filesystem; otherwise each PostgreSQL operation will be immediately written to the disk (with the fsync system call). We have had this option turned off on many production servers for nearly 10 years, and we have never had data corruption.

    : This is BAD advice. You risk corrupting your data. Of course you might get lucky for some years, as you have. The same gain can be had by using a raid-controller with a battery-backed write cache - no additional risk.
    fredz : We trust our ext3 filesystems. A write cache is limited. For example, we have maintained the Century21 France database for 8 years; more than 3000 people are writing to this database in real time. We have home-made middleware to mirror all queries to another database in case of a server crash, but we have never had any problem. See: http://www.postgresql.org/docs/8.1/interactive/runtime-config-wal.html
  • Actually there are some simple rules which will get you in most cases enough performance:

    1. Indices are the first part. Primary keys are automatically indexed. I recommend putting indices on all foreign keys. Further, put indices on all columns which are frequently queried; if there are heavily used queries on a table where more than one column is queried, put an index on those columns together.

    2. Memory settings in your PostgreSQL installation. Set the following parameters higher:


    shared_buffers, work_mem, maintenance_work_mem, temp_buffers
    

    If it is a dedicated database machine, you can easily set the first 3 of these to half the RAM (just be careful under Linux with shared buffers; you may have to adjust the shmmax parameter); in any other case it depends on how much RAM you would like to give to PostgreSQL.

    http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html

    Grasper : PKs are auto-indexed? How come they do not show up under the "indexes" list in the pgAdmin tool?

svn changelists: how to limit operations to "default" changelist?

Subversion 1.5 introduced changelists and I wanted to use this feature to group a change for later and continue to work on other files. The problem is that the subversion commands like svn diff and svn commit work on all modified files. I can limit the files they operate on if I specify a changelist with the --changelist option. But how can I limit the operations to files that are in no changelist at all?

For example: file1 and file2 are both modified. file1 is in the changelist A and file2 is in no changelist.

If I do svn diff --changelist A I see the diff for file1.

But if I do svn diff I see the diff for file1 and file2.

How do I manage to see a diff of just file2, i.e. of all the files that are not part of a changelist?

I am using Subversion 1.6 (in case this makes a difference).

From stackoverflow
  • Sorry, subversion doesn't have a syntax to express that yet.

    There are some ideas to allow a --changelist "" syntax, but that isn't implemented yet.

  • If you're interested in doing this in a graphical environment, SmartSVN (cross-platform) can do this for you. Their free version is quite full-featured, too.

    David Kemp : But then, so does tortoiseSVN

Easiest way to implement an online order tracking database

I've been asked by a client to make an online tracking system for work we do for them (we will be typesetting a high volume of books for this client). Basically, it would be a database showing the books we are currently working on, with information on what stage of the project we are at, and estimated completion dates. The only people with access to this system would be us and employees of the client company.

I've worked in MySQL and PHP before; should I just go with what I know? This answer to a similar question suggests using Google Apps. I don't have any experience with Python, but happy to learn...

From stackoverflow
  • You're the only one using this, therefore I see no reason to use Google Apps. I'm usually wary of people suggesting Google Apps, Amazon's S3, Microsoft Azure, etc. Also, you're going to be using a radically different data store. Unless you want an excuse to learn Google Apps and Python, I'd say go with MySQL+PHP and be done with it! In short, there aren't really any technical reasons for you to go with Google Apps here.

  • Sticking with what you know is always a good solution when dealing with delivering products to customers. No customer likes to be your guinea pig while you learn a new technology, although that's often how it's done. If you are comfortable with MySQL and PHP then stick with it if it satisfies your requirements, if it seems not to then look for libraries, frameworks and components written in PHP that might help you reach that goal. If you still have difficulties (unlikely given the scope of the project given) then ask questions here :) & search the web for solutions and patterns.

    If all that fails and you can clearly solve your problem with another technology, then look at moving, but make sure your customer is aware of how that's going to affect your timeframes.

    When you've implemented this project and have some spare time, if there's a new direction you'd like to explore then use this project as your base and set to work without the stress of a deadline.

    That's my 2p worth... good luck!

  • Well, as everyone has already said, if you already know PHP, that's got to be awfully tempting.

    But it sounds simple enough that something like Django might save you a lot of time: its built-in admin interface could be used for the "update" side of the job, so all you'd need to template up is the "read" side, which is pretty easy.

  • When developing a CRUD application such as this, you may be required to reinvent the wheel a little if starting from scratch. Many parts of your project are not unique to the project. E.g authentication, database access, form manipulation etc.

    If getting things done is important to you it may be important to give your project a kick start and stop you wasting too much time.

    Use a coding framework

    Frameworks often have a lot of functionality ready for use straight out of the box. Options may include Django, Ruby on Rails, Joomla, CakePHP, CodeIgniter.

    Hack a tried and tested application

    Open source projects are often quite easy to mould to your needs. Drupal and Joomla are CMS products which can be used in a wide variety of ways. If your book-tracking drupal module is any good, maybe you could go on to offer it as an open source plugin?

    Use a currently available app in a new way

    Your app seems to be tracking the status of items added to a database. How about using software designed for tracking other types of items. E.g. bug tracking software, project management to-do list software or customer relationship management software?

    Sharkey : Actually, your last point there is a really good one. Redmine in particular lets you edit workflows and roles very easily, so it'd be easy to herd it into shape as a job tracker, I'd think.
  • I suggest you also look at Viravis.

How to work with settings spanning over multiple Solutions and Projects in VS 2008 and .NET

Hi,

I'm not quite sure how .NET and C# 3.5 handle the settings of applications spanning multiple projects as well as multiple solutions. Maybe someone can help me clear things up.

I've got 2 solutions, both containing several projects. Some of those projects contain a Settings.settings file under the Properties folder, containing specific configuration variables required by the source files in that project.

Something like

  1. JobManager Solution
    • Manager.Core (with settings file)
    • Manager.UserInterface (with settings file)
    • Manager.Extension
  2. Importer Solution
    • Importer (with settings file)
    • Service (with settings file)

As can be seen, Manager.Core contains its own configuration file to store database connection information and other stuff, whereas the Importer contains its own configuration file storing the paths to the import directories, so it knows where to get the files it needs to import into the database using Manager.Core. (That's what Manager.Core is there for; it contains all the queries and inserts to work with the DB.)

Service, on the other hand, is a Windows Service which uses the Importer and lets it run every hour or so, with its own configuration settings for error logging paths.

Now when I compile the Service, there is only 1 configuration file called Service.exe.config, containing only the configuration parameters specified in the Service project. My first approach was to duplicate every setting entry of Manager.Core and Importer in Service.exe.config. But testing showed that, somehow, the parameters of the Importer are present and used.

Where are the settings for Manager.Core and Importer stored when they are not present in Service.exe.config?

Are the settings of Manager.Core present, too, meaning it's unnecessary to duplicate the entries of those configuration settings in the Service settings file?

Kind regards, Michael

From stackoverflow
  • I've not used these, but could you replace them with links to a single file in the solutions?

    Michael Barth : Could do that, though I would prefer to utilize what's already there. It's not like it isn't working, I just don't understand how it works exactly. ;)
  • Settings defaults are normally generated into code which is then compiled into the resulting dll

    They have the CustomTool property set to 'SettingsSingleFileGenerator'.

    For a settings file called Foo.settings containing a single Application-scoped value 'MyName' of type string with value "Shuggy", in namespace Company.Properties (the default location for a project called 'Company'), if you looked at the dll in Reflector you would find a class in the Company.Properties namespace looking like this:

    [GeneratedCode(
    "Microsoft.VisualStudio.Editors.SettingsDesigner.SettingsSingleFileGenerator"
    ,"9.0.0.0")] 
    [CompilerGenerated]
    internal sealed class Foo : ApplicationSettingsBase
    {
        private static Foo defaultInstance = 
            ((Foo) SettingsBase.Synchronized(new Foo()));
    
        public static Foo Default
        {
            get { return defaultInstance; }
        }
    
    
        [ApplicationScopedSetting]
        [DefaultSettingValue("Shuggy")]
        [DebuggerNonUserCode]
        public string MyName
        {
            get
            {
                return (string) this["MyName"];
            }
        }
    }
    

    This is how settings structure and default values are persisted within the dlls which they are relevant to. The actual values are read from various config files depending on the scope (and possibly what programmatic changes the app decides to do)

    For a higher level view on how these are intended to be used see this article

    Michael Barth : Does this mean, when I have external DLLs included in an application, that I can't change their application settings without recompiling the external DLL?
    ShuggyCoUk : The settings can be changed via the standard user level or application level files. I do not believe user settings can be changed from the foo.exe.config file, but the Application-scoped ones can be. See http://msdn.microsoft.com/en-us/library/aa730869(VS.80).aspx for an overview of how these settings work.
    Michael Barth : Thank you for the answer!

How to tell Bazaar that a file is binary

This is to avoid having some <<< or some >>> in that file if there are conflicts.

If there is a conflict, I just want a message telling me there is a conflict and bazaar should not mess with the file.

With subversion, you can modify the svn:mime-type property. But I don't know if Bazaar has this feature.

From stackoverflow
  • The reference says that there is no explicit way of telling so far:

    Bazaar currently relies of [sic] content analysis to detect binary files for commands like diff. In the future, a binary = true rule may be added but it is not supported yet.

  • It is annoying, but you still have the .BASE, .OTHER and .THIS files, which are unchanged; you just replace the altered file with the one you need.
    Can be scripted, I suppose.

ASP.NET TRANSACTIONS

I have some code in my website which adds details into tables. Now I want that, if some error occurs, the previously entered data should be removed. How can I implement this with the help of transactions?

From stackoverflow
  • We need more details such as what database you are using and so forth. Basically you do this:

    Create a transaction

    Do some work in the context of that transaction

    Do some more work in the context of that transaction

    ...

    If an exception occurred, rollback the transaction, else commit.

    Roshan : I am using a SQL Server 2005 database. I am writing the code in C#
    Sam Saffron : Look at SqlTransaction
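
    For SQL Server from C#, a minimal SqlTransaction sketch might look like this; the table, column and connection string are made up:

    using System.Data.SqlClient;

    class DetailsSaver
    {
        static void SaveDetails(string connectionString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (SqlTransaction tx = conn.BeginTransaction())
                {
                    try
                    {
                        using (SqlCommand cmd = conn.CreateCommand())
                        {
                            cmd.Transaction = tx;
                            cmd.CommandText = "INSERT INTO Details (Name) VALUES (@name)";
                            cmd.Parameters.AddWithValue("@name", "example");
                            cmd.ExecuteNonQuery();
                            // ... further commands enlisted in the same transaction ...
                        }
                        tx.Commit();   // all inserts succeed together
                    }
                    catch
                    {
                        tx.Rollback(); // an error removes the previously entered data
                        throw;
                    }
                }
            }
        }
    }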

C# Generics question

I am a bit rusty on generics. I'm trying to do the following, but the compiler complains:

protected List<T> PopulateCollection(DataTable dt) where T: BusinessBase
{
    List<T> lst = new List<T>();
    foreach (DataRow dr in dt.Rows)
    {
        T t = new T(dr);
        lst.Add(t);
    }
    return lst;
}

So as you can see, I am trying to dump the contents of a table into an object (via passing a DataRow to the constructor) and then add the object to a collection. It complains that T is not a type or namespace it knows about and that I can't use where on a non-generic declaration.

Is this not possible?

From stackoverflow
  • You probably need to add the new generic constraint on T, as follows:

    protected List<T> PopulateCollection<T>(DataTable dt) where T : BusinessBase, new()
    ...
    

    You can't pass a DataRow into the constructor, but you can solve that by assigning it to a property of BusinessBase

  • where T: BusinessBase
    

    It should also have the new() restriction added, I think.

  • There are two big problems:

    • You can't specify a constructor constraint which takes a parameter
    • Your method isn't currently generic - it should be PopulateCollection<T> instead of PopulateCollection.

    You've already got a constraint that T : BusinessBase, so to get round the first problem I suggest you add an abstract (or virtual) method in BusinessBase:

    public abstract void PopulateFrom(DataRow dr);
    

    Also add a parameterless constructor constraint to T.

    Your method can then become:

    protected List<T> PopulateCollection<T>(DataTable dt)
        where T: BusinessBase, new()
    {
        List<T> lst = new List<T>();
        foreach (DataRow dr in dt.Rows)
        {
            T t = new T();
            t.PopulateFrom(dr);
            lst.Add(t);
        }
        return lst;
    }
    

    If you're using .NET 3.5, you can make this slightly simpler using the extension method in DataTableExtensions:

    protected List<T> PopulateCollection<T>(DataTable dt)
        where T: BusinessBase, new()
    {
        return dt.AsEnumerable().Select(dr =>
        {
            T t = new T();
            t.PopulateFrom(dr);
            return t;
        }).ToList();
    }
    

    Alternatively, you could make it an extension method itself (again, assuming .NET 3.5) and pass in a function to return instances:

    static List<T> ToList<T>(this DataTable dt, Func<DataRow, T> selector)
        where T: BusinessBase
    {
        return dt.AsEnumerable().Select(selector).ToList();
    }
    

    Your callers would then write:

    table.ToList(row => new Whatever(row));
    

    This assumes you go back to having a constructor taking a DataRow. It has the benefit of allowing you to write immutable classes (and ones which don't have a parameterless constructor), but it does mean you can't work generically without also having the "factory" function.

    eglasius : +1 clear on the issues, and for the last version. I don't think the intermediate version is much simpler than the foreach in this case.
    AngryHacker : I don't have the power to edit, so anyone who can, change return dt.Rows.AsEnumerable().Select(selector).ToList(); to return dt.AsEnumerable().Select(selector).ToList(); since AsEnumerable is an extention method on the DataTable not on the .Rows collection.
    Jon Skeet : @AngryHacker: Thanks, done.
  • The only constraint you can specify which allows for creation of new instances is new() - basically, a parameterless constructor. To circumvent this do either:

    interface ISupportInitializeFromDataRow
    {
        void InitializeFromDataRow(DataRow dataRow);
    }
    
    protected List<T> PopulateCollection<T>(DataTable dt) 
        where T : BusinessBase, ISupportInitializeFromDataRow, new()
    {
        List<T> lst = new List<T>();
        foreach (DataRow dr in dt.Rows)
        {
            T t = new T();
            t.InitializeFromDataRow(dr);
    
            lst.Add(t);
        }
        return lst;
    }
    

    Or

    protected List<T> PopulateCollection<T>(DataTable dt, Func<DataRow, T> builder) 
        where T : BusinessBase
    {
        List<T> lst = new List<T>();
        foreach (DataRow dr in dt.Rows)
        {
            T t = builder(dr);        
            lst.Add(t);
        }
        return lst;
    }
    
  • A possible way is:

    protected List<T> PopulateCollection<T>(DataTable dt) where T: BusinessBase, new()
        {
            List<T> lst = new List<T>();
            foreach (DataRow dr in dt.Rows)
            {
                T t = new T();
                t.DataRow = dr;
                lst.Add(t);
            }
            return lst;
        }
    
    Simon : public class BusinessBase{ public DataRow DataRow { get; set; }}
  • It is possible. I have exactly the same thing in my framework. I had exactly the same problem as you, and this is how I solved it. I'm posting the relevant snippets from the framework. If I remember correctly, the biggest problem was the requirement to call a parameterless constructor.

 public class Book<APClass> : Book where APClass : APBase
 {
        private DataTable Table; // data
            public override IEnumerator GetEnumerator()
            {                        
                for (position = 0;  position < Table.Rows.Count;  position++)           
                     yield return APBase.NewFromRow<APClass>(Table.Rows[position], this.IsOffline);
            }
       ...
    
    
      public class APBase ...
      {
        ...
        internal static T NewFromRow<T>(DataRow dr, bool offline) where T : APBase
            {
    
                Type t = typeof(T);
                ConstructorInfo ci;
    
                if (!ciDict.ContainsKey(t))
                {
                    ci = t.GetConstructor(new Type[1] { typeof(DataRow) });
                    ciDict.Add(t, ci);
                }
                else ci = ciDict[t];
    
                T result = (T)ci.Invoke(new Object[] { dr });
    
                if (offline)
                    result.drCache = dr;    
    
                return result;
            }
    

    In this scenario, the base class has a static method to instantiate objects of its derived classes using a constructor that accepts a DataRow.

How do you validate the data in your ViewModel in the MVVM pattern?

I listened to this Herding Code podcast on MVC, MVP and MVVM yesterday, and was struck by the idea of sending your whole ViewModel object to a validator which does nothing but validate all the fields in it and send it back.

  • has anyone implemented that type of validation pattern?
  • how did it look technically?

I am thinking of extending this idea by also having a "FormPreparer" which receives the whole ViewModel after the Model data, field metadata, and other user and context data is fed into it, then this "FormPreparer" prepares all the fields on the form which will be on the View so that e.g.

  • date fields are represented by DatePicker controls
  • e-mail fields are represented by textBoxes with e-mail validation
  • and e.g. the Customer field is a dropdown of customers

the metadata defines these things about each field:

  • type (text, date, date/time, duration, email, url, customer)
  • control (textbox, multiline textbox, dropdown, radiobuttons, checkbox, clickbutton)
  • label (e.g. "First Name")
  • helptext (e.g. "this is the number you find on the top of Form 4A")
  • example ("#123ABCD")
  • display tab (e.g. for forms that consist of a number of tab areas)
  • display area (e.g. for forms that group fields into areas)
  • display order (e.g. the order of the fields in the group)
  • value (e.g. "Jim")
  • autosuggest data (an array of names which needs to be displayed when the user begins to type)
  • field status (readonly, edit, hide)

the "FormPreparer" would combine all this information and then present data to the View which:

  • shows all form data in appropriate controls (dates as datepickers, descriptions in multiline textboxes, etc.)
  • takes care of all validation automatically
  • would only display fields which the current user is allowed to see and would only let him edit data which he is allowed to edit
  • etc.

Has anyone programmed a WPF/MVVM application along these lines?

From stackoverflow
  • No. I'm working on a WPF/MVVM project, but we have not taken such a generic approach to validation. We are creating a custom validation method in each view model with validation logic specific to each view model.

    A generic validation routine that could be used for all view models would be great.
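
    As a starting point, such a routine could look like the sketch below, which uses DataAnnotations (this relies on .NET 4's Validator class; the ViewModelValidator name is made up, and each rule lives as an attribute such as [Required] on a ViewModel property):

    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    public static class ViewModelValidator
    {
        // Runs every validation attribute ([Required], [Range], [StringLength], ...)
        // found on the view model's properties and collects the failures.
        public static IList<ValidationResult> Validate(object viewModel)
        {
            var results = new List<ValidationResult>();
            var context = new ValidationContext(viewModel, null, null);
            Validator.TryValidateObject(viewModel, context, results, true);
            return results; // an empty list means the view model is valid
        }
    }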

MySQL : selecting the X smallest values

Hi,

Consider a table like this:

CREATE TABLE `amoreAgentTST01` (
  `moname` char(64) NOT NULL DEFAULT '',
  `updatetime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `data` longblob,
  PRIMARY KEY (`moname`,`updatetime`)
);

I have a query that finds the oldest record for each distinct 'moname', but only if there are multiple records for that 'moname':

SELECT moname, updatetime FROM amoreAgentTST01 a 
WHERE (SELECT count(*) FROM amoreAgentTST01 x WHERE x.moname = a.moname) > 1 
  AND a.updatetime = (SELECT min(updatetime) FROM amoreAgentTST01 y WHERE y.moname = a.moname) ;

My question is: how do I do the same, but selecting the X oldest values? For now I simply run this, delete the oldest values and rerun it... which is not so nice.

Second question: what do you think of the above query? Can it be improved? Is there any obvious bad practice?

Thank you in advance for your advices and help.

Barth

From stackoverflow
  • Would something like this work (untested):

    SELECT moname, MIN(updatetime) FROM amoreAgentTST01 
    GROUP BY moname HAVING COUNT(moname)>1
    

    Edit - the above is meant only as a replacement for your existing code, so it doesn't directly answer your question.

    I think something like this should work for your main question:

    SELECT moname, updatetime FROM amoreAgentTST01 
    GROUP BY moname, updatetime 
    HAVING COUNT(moname)>1 
    ORDER BY updatetime LIMIT 0, 10
    

    Edit - sorry, the above won't work because it's returning only 10 records for all the monames - rather than the 10 oldest for each. Let me have a think.

    One more go at this (admittedly, this one looks a bit convoluted):

    SELECT a.moname, a.updatetime FROM amoreAgentTST01 a
    WHERE EXISTS 
    (SELECT * FROM amoreAgentTST01 b 
    WHERE a.moname = b.moname AND a.updatetime = b.updatetime  
    ORDER BY b.updatetime LIMIT 0, 10)
    AND (SELECT COUNT(*) FROM amoreAgentTST01 x WHERE x.moname = a.moname) > 1
    

    I should add that if there is an ID column - generally the primary key - then that should be used for the sub-query joins for improved performance.

    Barth : Indeed this is a lot better than the query I had previously. Thank you !
    BrynJ : I've just updated my answer with a possible solution.

Adding items to a List<T> / defensive programming

Explicitly checking/handling that you don't hit the 2^31 - 1 (?) maximum number of entries when adding to a C# List is craziness, true or false?

(Assuming this is an app where the average List size is less than a 100.)

From stackoverflow
  • Seems excessive. Would you not hit the machine's memory limit first, depending on the size of the objects in your list? (I assume this check is performed by the user of the List class, and is not any check in the implementation?)

    Perhaps it's reassuring that colleagues are thinking ahead though ? (sarcasm!)

  • It would seem so, and I probably wouldn't include the check, but I'm conflicted on this. Programmers once thought that 2 digits were enough to represent the year in date fields on the grounds that it was fine for the expected life of their code; however, we discovered that this assumption wasn't correct.

    Look at the risk, look at the effort and make a judgement call (otherwise known as an educated guess! :-) ). I wouldn't say there's any hard and fast rule on this one.

  • As in the answer above, I suspect more things would go wrong before you need to worry about that. But yes, if you have the time and inclination, you can polish code till it shines!

  • True

    (well you asked true or false..)

  • 1. Memory limits

    Well, the size of System.Object without any properties is 8 bytes (2 x 32-bit pointers), or 16 bytes in a 64-bit system. [EDIT:] Actually, I just checked in WinDbg, and the size is 12 bytes on x86 (32-bit).

    So in a 32-bit system, a list filled to the 2^31 - 1 limit with such objects would need about 24 GB of RAM (2^31 x 12 bytes), which you cannot have on a 32-bit system.

    2. Program design

    I strongly believe that such a large list shouldn't be held in memory, but rather in some other storage medium. In that case, you will always have the option to create a caching class wrapping a List, which would handle the actual storage under the hood. So testing the size before adding is the wrong place to do the testing; your List implementation should do it itself if you find it necessary one day (see the sketch at the end of this answer).

    3. To be on the safe side

    Why not add a re-entrance counter inside each method to prevent a Stack Overflow? :)

    So, yes, it's crazy to test for that. :)
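
    A minimal sketch of the wrapper idea from point 2, with a made-up class name and threshold:

    using System;
    using System.Collections.Generic;

    public class CheckedList<T>
    {
        private const int MaxInMemory = 1000000; // arbitrary threshold
        private readonly List<T> items = new List<T>();

        public int Count { get { return items.Count; } }

        public void Add(T item)
        {
            if (items.Count >= MaxInMemory)
            {
                // this is where a real implementation would spill to a
                // database or file-backed store instead of throwing
                throw new InvalidOperationException("List grew beyond the in-memory limit.");
            }
            items.Add(item);
        }
    }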

  • Just tried this code:

    List<int> list = new List<int>();
    while (true) list.Add(1);
    

    I got a System.OutOfMemoryException. So what would you do to check / handle this?

    Precipitous : You catch OutOfMemoryException. OOMs can usually be handled and the operation retried in a minute or two. E.g. two separate threads ask for lots of memory, only one gets it. Try the second later. Can sometimes occur if your app does PDF or image manipulation in memory.
  • If you keep adding items to the list, you'll run out of memory long before you hit that limit. By "long" I really mean "a lot sooner than you think".

    See this discussion on the large object heap (LOH). Once you hit around 21500 elements (half that on a 64-bit system) (assuming you're storing object references), your list will start to be a large object. Since the LOH isn't compacted in the same way the normal .NET heaps are, you'll eventually fragment it badly enough that a large enough continuous memory area cannot be allocated.

    So you don't have to check for that limit at all, it's not a real limit.

  • Yes, that is craziness.

    Consider what happens to the rest of the code when you start to reach those numbers. Is the application even usable if you have millions of items in the list?

    If it's even possible that the application will reach that amount of data, perhaps you should instead take measures to keep the list from getting that large. Perhaps you should not even keep all the data in memory at once. I can't really imagine a scenario where any code could practically make use of that much data.