Sunday, April 17, 2011

Class containing auto_ptr stored in vector

In an answer to http://stackoverflow.com/questions/700588/is-it-safe-to-store-objects-of-a-class-which-has-an-stdautoptr-as-its-member-v I stated that a class that contained an auto_ptr could be stored in a vector provided the class had a user-defined copy constructor.

There were several comments suggesting that this was not the case, so this question is an attempt to clear the issue up. Consider the following code:

#include <memory>
#include <vector>
using namespace std;

struct Z {};

struct A {

    A( Z z ) 
     : p( new Z(z) ) {} 

    A( const A & a ) 
     : p( a.p.get() ? new Z( *a.p.get()) : 0 ) {}

    // no assignment op or dtor defined by intent

    auto_ptr <Z> p;
};

int main() {
    vector <A> av;    
    Z z;     
    A a(z);
    av.push_back( a );  
    av.push_back( A(z) ); 
    av.clear();    
}

Please examine the above & in your reply indicate where undefined behaviour in the meaning of the C++ Standard could occur for this particular class used in this particular way. I am not interested whether the class is useful, well-behaved, sortable, or how it performs under exceptions.

Please also note that this is not a question about the validity of creating a vector of auto_ptrs - I am well aware of the issues regarding that.

Thanks all for your inputs on what in retrospect is probably a rather silly question. I guess I focused too much on the copy ctor and forgot about assignment. The lucky winner of my acceptance points (and points mean prizes!) is litb, for a typically exhaustive explanation (sorry, earwicker).

From stackoverflow
  • Since the regular auto_ptr semantics suggest that ownership is passed during copying, I would rather use boost::scoped_ptr here. Of course, the assignment operator is missing.

  • I don't think it's necessarily the case that the above code will even compile. Surely the implementor of std::vector is at liberty to require an assignment operator to be available, from const A&?

    And having just tried it, it doesn't compile on Visual Studio C++ 2008 Service Pack 1:

    binary '=' : no operator found which takes a right-hand operand of type 'const A' (or there is no acceptable conversion)

    My guess is that, on the guidance of Herb Sutter, the container classes in VC++ make every effort to impose the standard requirements on their type parameters, specifically to make it hard to use auto_ptr with them. They may have overstepped the boundaries set by the standard of course, but I seem to remember it mandating true assignment as well as true copy construction.

    It does compile in g++ 3.4.5, however.

    anon : Yes, now you remind me I remember that too - I guess that answers my question :-(
    Daniel Earwicker : Well where's my green tick for being a clever boy then? :)
    anon : All good things come to he who waits.
    Daniel Earwicker : Oh man. The anticipation is almost unbearable!
    Johannes Schaub - litb : that is exactly what i told him on his answer on the other question. you have to be able to assign / copy from "const T" because the requirements state it. not because it might be useful or anything like that. +1 indeed
    Johannes Schaub - litb : here is the output of gcc 4.1: http://codepad.org/P0uxxqxH
  • What about the following?

    cout << av[ 0 ] << endl;
    

    Also, conceptually, a copy should leave the item copied from unchanged. This is being violated in your implementation.

    (It is quite another thing that your original code compiles fine with g++ -pedantic ... and Comeau but not VS2005.)

    Daniel Earwicker : "Also, conceptually, a copy should leave the item copied from unchanged." - try telling that to auto_ptr!
    anon : My question wasn't about the usefulness of the class - obviously it is completely broken, but only about UB. But as Earwicker pointed out I think VC++ may be right for once. Interesting about Comeau though...
    dirkgently : @Earwicker: That was my point about auto_ptrs.
    dirkgently : @Neil Butterworth: You are only looking at part of the class and a special construct that does not invoke UB. The point of my example.
  • Objects stored in containers are required to be "CopyConstructible" as well as "Assignable" (C++2008 23.1/3).

    Your class tries to deal with the CopyConstructible requirement (though I'd argue it still doesn't meet it - I edited that argument out since it's not required and because it's arguable I suppose), but it doesn't deal with the Assignable requirement. To be Assignable (C++2008 23.1/4), the following must be true where t is a value of T and u is a value of (possibly const) T:

    t = u returns a T& and t is equivalent to u

    The standard also says in a note (20.4.5/3): "auto_ptr does not meet the CopyConstructible and Assignable requirements for Standard Library container elements and thus instantiating a Standard Library container with an auto_ptr results in undefined behavior."

    Since you don't declare or define an assignment operator, an implicit one will be provided that uses the auto_ptr's assignment operator, which definitely makes t not equivalent to u, not to mention that it won't work at all for "const T u" values (which is what Earwicker's answer points out - I'm just pointing out the exact portion(s) of the standard).
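    The ownership transfer behind that broken postcondition is easy to demonstrate with std::unique_ptr, auto_ptr's modern replacement, where the transfer must be spelled out with std::move (a sketch in current C++, not the original auto_ptr, which was deprecated and later removed):

    ```cpp
    #include <cassert>
    #include <memory>
    #include <utility>

    int main() {
        // u owns an int; after the transfer it is left null, so the
        // Assignable postcondition "t is equivalent to u" cannot hold.
        std::unique_ptr<int> u(new int(42));
        std::unique_ptr<int> t;

        t = std::move(u);      // ownership moves; auto_ptr did this on plain assignment

        assert(t && *t == 42); // t received the object
        assert(!u);            // the source was drained by the "copy"
        return 0;
    }
    ```

    With auto_ptr the same drain happened silently on a plain `t = u`, which is exactly why `t` cannot end up equivalent to an unchanged `u`.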

  • Trying to put the list of places together that makes the example undefined behavior.

    #include <memory>
    #include <vector>
    using namespace std;
    
    struct Z {};
    
    struct A {
    
        A( Z z ) 
            : p( new Z(z) ) {} 
    
        A( const A & a ) 
            : p( a.p.get() ? new Z( *a.p.get()) : 0 ) {}
    
        // no assigment op or dtor defined by intent
    
        auto_ptr <Z> p;
    };
    
    int main() {
        vector <A> av;  
        ...
    }
    

    I will examine the lines up to the one where you instantiate the vector with your type A. The Standard has to say

    In 23.1/3:

    The type of objects stored in these components must meet the requirements of CopyConstructible types (20.1.3), and the additional requirements of Assignable types.

    In 23.1/4 (emphasis mine):

    In Table 64, T is the type used to instantiate the container, t is a value of T, and u is a value of (possibly const) T.

    +-----------+---------------+---------------------+
    |expression |return type    |postcondition        |
    +-----------+---------------+---------------------+
    |t = u      |T&             |t is equivalent to u |
    +-----------+---------------+---------------------+
    

    Table 64

    In 12.8/10:

    If the class definition does not explicitly declare a copy assignment operator, one is declared implicitly. The implicitly-declared copy assignment operator for a class X will have the form

    X& X::operator=(const X&)
    

    if

    • each direct base class B of X has a copy assignment operator whose parameter is of type const B&, const volatile B& or B, and
    • for all the nonstatic data members of X that are of a class type M (or array thereof), each such class type has a copy assignment operator whose parameter is of type const M&, const volatile M& or M.

    Otherwise, the implicitly declared copy assignment operator will have the form

    X& X::operator=(X&)
    

    (Note the last and second last sentence)

    In 17.4.3.6/1 and /2:

    In certain cases (replacement functions, handler functions, operations on types used to instantiate standard library template components), the C++ Standard Library depends on components supplied by a C++ program. If these components do not meet their requirements, the Standard places no requirements on the implementation.

    In particular, the effects are undefined in the following cases:

    • for types used as template arguments when instantiating a template component, if the operations on the type do not implement the semantics of the applicable Requirements subclause (20.1.5, 23.1, 24.1, 26.1). Operations on such types can report a failure by throwing an exception unless otherwise specified.

    Now, if you look at the specification of auto_ptr you will note it has a copy-assignment operator that takes a non-const auto_ptr. Thus, the implicitly declared copy assignment operator of your class will also take a non-const type as its parameter. If you read the above places carefully, you will see how it says that instantiating a vector with your type as written is undefined behavior.

    anon : But my class has an _explicitly_ declared copy constructor, so I don't see how this applies.
    Johannes Schaub - litb : it does not apply to that at all. it's the copy assignment operator that is missing - not the copy constructor. i would say as defined, your copy constructor is all fine.
    anon : oops - my misread - sorry
    Johannes Schaub - litb : Neil, c++98 standard had a typo that said "copy constructor" at one particular place (and i take my quotes from c++98 - only have that). in a revisions list i read c++03 fixed that. maybe it was this that made you think of a copy constructor :) (i already fixed it a hour ago)
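    litb's reading of 12.8/10 can be checked mechanically. The sketch below uses a hypothetical AutoPtrLike stand-in (its assignment operator takes a non-const reference, like auto_ptr's) and C++11 <type_traits> to show that a holder class's implicitly-declared copy assignment operator cannot accept a const right-hand side:

    ```cpp
    #include <type_traits>

    // Stand-in with auto_ptr-like copy semantics: its assignment
    // operator takes a NON-const reference because it mutates its source.
    struct AutoPtrLike {
        AutoPtrLike& operator=(AutoPtrLike&) { return *this; }
    };

    // Per 12.8/10, a class holding such a member gets an implicitly-declared
    // copy assignment operator of the form X& operator=(X&), which cannot
    // be called with a const right-hand side.
    struct Holder {
        AutoPtrLike p;
    };

    static_assert(std::is_assignable<Holder&, Holder&>::value,
                  "assignable from a non-const lvalue");
    static_assert(!std::is_assignable<Holder&, const Holder&>::value,
                  "NOT assignable from const - the 23.1/4 requirement fails");

    int main() { return 0; }
    ```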

What is meant by 'first class object'?

In a recent question, I received suggestions to talk on, amongst other things, the aspect of JavaScript where functions are 'first class' objects. What does the 'first class' mean in this context, as opposed to other objects?

EDIT (Jörg W Mittag): Exact Duplicate: "What is a first class programming construct?"

From stackoverflow
  • To quote Wikipedia:

    In computer science, a programming language is said to support first-class functions (or function literal) if it treats functions as first-class objects. Specifically, this means that the language supports constructing new functions during the execution of a program, storing them in data structures, passing them as arguments to other functions, and returning them as the values of other functions.

    This page also illustrates it beautifully:

    Really, just like any other variable

    • A function is an instance of the Object type
    • A function can have properties and has a link back to its constructor method
    • You can store the function in a variable
    • You can pass the function as a parameter to another function
    • You can return the function from a function

    also read TrayMan's comment, interesting...

    Spoike : Quoting wikipedia is nice and dandy, but the description is written in a language for scientists and not for geeks. What the heck does all that mean anyway? The last sentence in that quote is vague.
    Sander Versluys : @Spoike, true... provided javascript resource.
    TrayMan : Conveniently a language that has first-class functions also has higher-order functions, as opposed to being limited to first-order functions, which would rule out first-class functions. (Though higher-order, not first-class is possible.)
    ProfK : I found nothing unclear in the Wikipedia quote, but the additional link is excellent.
  • It means that functions are objects, with a type and a behaviour. They can be dynamically built, passed around as any other object, and the fact that they can be called is part of their interface.

  • It means that a function actually inherits from Object, so you can pass it around and work with it like any other object.

    In C#, however, you need to resort to delegates or reflection to play around with functions. (This got much better recently with lambda expressions.)

  • I guess when something is first class in a language, it means that it's supported by its syntax rather than by a library or syntactic sugar. For example, classes in C are not first class.

  • Simple test. If you can do this in your language (Python as example):

    def square(x):
        return x*x
    
    f = square
    
    print f(5) #prints 25
    

    Your language is treating functions as first class objects.

    Thomas L Holaday : But I can do this in C++: int twice(int x) { return x << 1; } int (*f)(int) = twice; std::cout<<(*f)(5)<
    cHao : Til you can create a function inside a function, i want to say no.
  • The notion of "first-class functions" in a programming language was introduced by British computer scientist Christopher Strachey in the 1960s. The most famous formulation of this principle is probably in Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman:

    • They may be named by variables.
    • They may be passed as arguments to procedures.
    • They may be returned as the results of procedures.
    • They may be included in data structures.

    Basically, it means that you can do with functions everything that you can do with all other elements in the programming language. So, in the case of JavaScript, it means that everything you can do with an Integer, a String, an Array or any other kind of Object, you can also do with functions.
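    As a sketch of the same checklist outside JavaScript: C++11 lambdas and std::function approximate first-class functions (named by variables, passed as arguments, returned as results, stored in data structures), even though C++ functions proper are not first-class:

    ```cpp
    #include <cassert>
    #include <functional>
    #include <vector>

    // Return a function from a function (a closure capturing n).
    std::function<int(int)> make_adder(int n) {
        return [n](int x) { return x + n; };
    }

    // Pass a function as an argument.
    int apply_twice(const std::function<int(int)>& f, int x) {
        return f(f(x));
    }

    int main() {
        std::function<int(int)> add3 = make_adder(3); // named by a variable
        assert(add3(4) == 7);
        assert(apply_twice(add3, 0) == 6);            // passed as an argument

        std::vector<std::function<int(int)>> fns;     // stored in a data structure
        fns.push_back(add3);
        fns.push_back(make_adder(10));
        assert(fns[1](5) == 15);
        return 0;
    }
    ```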

Determine path dynamically in Silverlight 2

I have a border with rounded corners within a canvas and want to add a clipping region to the canvas so that anything I add is clipped to the region within the border. I know that I can set the Clip property of the canvas but as the canvas and object are sized dynamically rather than having sizes assigned in the XAML, I can't figure out how to calculate the path to use. Is there some way to derive a PathGeometry from a UIElement (the border in this case)? If not what is the best way to approach this? Here is the XAML for the test page I'm working with.

<UserControl x:Class="TimelinePrototype.Page"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Grid x:Name="LayoutRoot">
 <Grid.RowDefinitions>
  <RowDefinition Height="auto" />
  <RowDefinition />
 </Grid.RowDefinitions>
 <StackPanel Orientation="Horizontal" Margin="10">
  <Button x:Name="cmdDraw" FontSize="18" Click="cmdDraw_Click" Content="Draw" Margin="0,0,5,0" VerticalAlignment="Bottom" />
  <TextBlock x:Name="txtDateRange" FontSize="18" Margin="10,0,10,10" VerticalAlignment="Bottom" />
 </StackPanel>
 <Canvas x:Name="TimelineCanvas" Grid.Row="1" HorizontalAlignment="Stretch" 
    SizeChanged="TimelineCanvas_SizeChanged">
  <Border x:Name="TimelineBorder" 
    Background="LightGray" 
    BorderBrush="Black" 
    BorderThickness="2" 
    CornerRadius="15" 
    Margin="10"
    Grid.Row="1"
    VerticalAlignment="Top">
  </Border>
 </Canvas>
</Grid>
</UserControl>

From stackoverflow
  • Try using the ActualHeight and ActualWidth properties

    var height = TimelineCanvas.ActualHeight;
    var width = TimelineCanvas.ActualWidth;
    
    Steve Crane : I had thought of using those but was wondering if there might be some other, more clever way of doing this.
  • I ended up using this code, but would still be interested in any alternate methods.

    RectangleGeometry clipRect = new RectangleGeometry();
    clipRect.Rect = new Rect(TimelineBorder.Margin.Left, TimelineBorder.Margin.Top, TimelineCanvas.ActualWidth - (TimelineBorder.Margin.Left + TimelineBorder.Margin.Right), TimelineCanvas.ActualHeight - (TimelineBorder.Margin.Top + TimelineBorder.Margin.Bottom));
    clipRect.RadiusX = TimelineBorder.CornerRadius.TopLeft;
    clipRect.RadiusY = TimelineBorder.CornerRadius.TopLeft;
    TimelineCanvas.Clip = clipRect;
    
    MojoFilter : I'd have to endorse that method; if only because I've done it that way dozens of times without seeing a nicer approach.
  • Try blacklight

    The blacklight toolpack has a rounded corner clipping tool and is free.

    Steve Crane : Thanks, I'll check it out.

C#: Blowfish Encipher a single dword

Hello,

I'm translating a C++ TCP client into C#. The client is used to encode 4 bytes of an array using Blowfish.

C++ Blowfish

C# Blowfish(C# NET)

C++

    BYTE response[6] = 
    {
        0x00, 0x80, 0x01, 0x61, 0xF8, 0x17
    };

    // Encrypt the last 4 bytes of the packet only (0x01, 0x61, 0xF8, 0x17)
    blowfish.Encode(response + 2, response + 2, 4); 

    // Send the packet
    send(s, (char*)sendPtr, sendSize, 0);

C#

    responce  = new byte[6] { 0x00, 0x80, 0x01, 0x61, 0xF8, 0x17};

    // Encrypt the last 4 bytes of the packet only (0x01, 0x61, 0xF8, 0x17)
    Handshake.blowfish.Encrypt(responce, 2, responce, 2, 4);

 // Send the packet
    WS.sock.Send(responce);

In the C++ code, when blowfish.Encode is called with these parameters, it goes into the cBlowFish::Encode function:

DWORD cBlowFish::Encode(BYTE * pInput, BYTE * pOutput, DWORD lSize)
{
DWORD  lCount, lOutSize, lGoodBytes;
BYTE *pi, *po;
int  i, j;
int  SameDest =(pInput == pOutput ? 1 : 0);

lOutSize = GetOutputLength(lSize);
for(lCount = 0; lCount < lOutSize; lCount += 8)
{
 if(SameDest) // if encoded data is being written into input buffer
 {
   if(lCount < lSize - 7) // if not dealing with uneven bytes at end
   {
     Blowfish_encipher((DWORD *) pInput, (DWORD *)(pInput + 4));
   }
   else  // pad end of data with null bytes to complete encryption
   {
   po = pInput + lSize; // point at byte past the end of actual data
   j =(int)(lOutSize - lSize); // number of bytes to set to null
   for(i = 0; i < j; i++)
    *po++ = 0;
     Blowfish_encipher((DWORD *) pInput, (DWORD *)(pInput + 4));
   }
   pInput += 8;
 }
 else    // output buffer not equal to input buffer, so must copy
 {               // input to output buffer prior to encrypting
   if(lCount < lSize - 7) // if not dealing with uneven bytes at end
   {
    pi = pInput;
    po = pOutput;
    for(i = 0; i < 8; i++)
    // copy bytes to output
     *po++ = *pi++;
     // now encrypt them
   Blowfish_encipher((DWORD *) pOutput, (DWORD *)(pOutput + 4));
   }
   else  // pad end of data with null bytes to complete encryption
   {
    lGoodBytes = lSize - lCount; // number of remaining data bytes
    po = pOutput;
    for(i = 0; i <(int) lGoodBytes; i++)
     *po++ = *pInput++;
    for(j = i; j < 8; j++)
     *po++ = 0;
     Blowfish_encipher((DWORD *) pOutput, (DWORD *)(pOutput + 4));
   }
   pInput += 8;
   pOutput += 8;
 }
}
return lOutSize;
}

To make it clear, the loop is executed only one time due to the short length (4 bytes) of the data passed.

Only one call is executed from this huge function, and only once:

Blowfish_encipher((DWORD *) pInput, (DWORD *)(pInput + 4));

// meaning the code passes the first two if statements and then leaves the loop and the function.

From my point of view, the solution is hidden somewhere inside the encipher function:

void cBlowFish::Blowfish_encipher(DWORD *xl, DWORD *xr)
{
union aword Xl, Xr;

Xl.dword = *xl;
Xr.dword = *xr;

Xl.dword ^= PArray [0];
ROUND(Xr, Xl, 1);  
ROUND(Xl, Xr, 2);
ROUND(Xr, Xl, 3);  
ROUND(Xl, Xr, 4);
ROUND(Xr, Xl, 5);  
ROUND(Xl, Xr, 6);
ROUND(Xr, Xl, 7);  
ROUND(Xl, Xr, 8);
ROUND(Xr, Xl, 9);  
ROUND(Xl, Xr, 10);
ROUND(Xr, Xl, 11); 
ROUND(Xl, Xr, 12);
ROUND(Xr, Xl, 13); 
ROUND(Xl, Xr, 14);
ROUND(Xr, Xl, 15); 
ROUND(Xl, Xr, 16);
Xr.dword ^= PArray [17];

*xr = Xl.dword;
*xl = Xr.dword;
}

The definitions:

#define S(x,i)    (SBoxes[i][x.w.byte##i])
#define bf_F(x)   (((S(x,0) + S(x,1)) ^ S(x,2)) + S(x,3))
#define ROUND(a,b,n)    (a.dword ^= bf_F(b) ^ PArray[n])

The problem is that the C++ Blowfish_encipher function has two parameters: xl and xr, pointers to the left and right DWORD halves of the 8-byte block.

The C# Blowfish EncryptBlock function has four parameters. Why?

        public void EncryptBlock(uint hi, uint lo, out uint outHi, out uint outLo)

Unlike the C++ Blowfish, EncryptBlock calls Encrypt instead of Encrypt calling EncryptBlock. Maybe EncryptBlock is NOT the C++ Blowfish_encipher?

Anyway, my problem is that when I call the C++ code with that array of 6 bytes, requesting the Blowfish to encode only the last 4 bytes, it does it.

While if I call the encrypt function in C# with those 4 bytes, it returns 0x00. (If you'd like to see the C# Blowfish, check my first lines - I added a hyperlink there.)

Note that I can't change the packet structure; it should be just like that, but encrypted.

I also tried this:

Knowing that the C++ Encrypt function executes only one call - Blowfish_encipher - I tried to call EncryptBlock in C# directly, but it takes Hi UInt32 and Lo UInt32 as input and output. How do I split my bytes into HI and LO? Will this work if EncryptBlock calls the Blowfish Encrypt in C#? I'm not quite sure.

Thank you in advance!

From stackoverflow
  • Hi John

    Blowfish works on eight byte blocks. The only way to encrypt data that falls short of eight bytes (or a multiple of eight) is to pad it out (in this case with zeroes).

    You need to pass an eight byte buffer into your C++ function, since you are encrypting in place. The code you posted will actually encrypt four additional bytes of adjacent memory ((DWORD *)(pInput + 4)), which is obviously not what you want. Furthermore, all eight output bytes are required in order to decrypt - so, unfortunately, you can't just pass four of the encrypted bytes and expect them to be decrypted successfully at the other end.

    I know this doesn't solve your problem - I don't see any way to solve it, since you want to send only four bytes of encrypted data and Blowfish always produces a minimum of eight!
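    The zero-padding the answer describes can be sketched as a standalone helper (pad_to_block is a hypothetical name, not part of the posted cBlowFish class):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Pad a buffer with zero bytes up to the next multiple of the Blowfish
    // block size (8 bytes) - the same scheme the C++ Encode loop applies.
    std::vector<unsigned char> pad_to_block(const std::vector<unsigned char>& in) {
        const std::size_t block = 8;
        std::vector<unsigned char> out(in);
        while (out.size() % block != 0)
            out.push_back(0);
        return out;
    }

    int main() {
        // The 4 payload bytes from the question's packet.
        std::vector<unsigned char> payload = {0x01, 0x61, 0xF8, 0x17};
        std::vector<unsigned char> padded = pad_to_block(payload);
        assert(padded.size() == 8);               // 4 data bytes + 4 zero bytes
        assert(padded[3] == 0x17 && padded[4] == 0x00);
        return 0;
    }
    ```

    All eight bytes of the resulting block must then be encrypted and transmitted; the receiver cannot decrypt from only four of them.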

.Net DefaultValueAttribute on Properties

I got this code in a user control:

[DefaultValue(typeof(Color), "Red")]
public Color MyColor { get; set; }

How can I change MyColor to be its default value?

From stackoverflow
  • The DefaultValueAttribute does not set the property to the value, it is purely informational. The Visual Studio designer will display this value as non-bold and other values as bold (changed), but you'll still have to set the property to the value in the constructor.

    The designer will generate code for the property if the value was set by the user, but you can remove that code by right clicking on the property and clicking Reset.

  • Are you initializing MyColor in your constructor?

    The DefaultValue attribute does not actually set any values. It simply tells the designer which value not to generate code for, and the designer shows that default value in non-bold to reflect this.

  • The "DefaultValue" attribute does not write code for you... but rather it is used for you to tell people (such as Mr Property Grid, or Mr Serializer Guy) that you plan to set the default value to Red.

    This is useful for things like the PropertyGrid... as it will BOLD any color other than Red... also for serialization, people may choose to omit sending that value, because you informed them that it's the default :)

  • DefaultValueAttribute is not used by the compiler, and (perhaps confusingly) it doesn't set the initial value. You need to do this your self in the constructor. Places that do use DefaultValueAttribute include:

    • PropertyDescriptor - provides ShouldSerializeValue (used by PropertyGrid etc)
    • XmlSerializer / DataContractSerializer / etc (serialization frameworks) - for deciding whether it needs to be included

    Instead, add a constructor:

    public MyType() {
      MyColor = Color.Red;
    }
    

    (if it is a struct with a custom constructor, you need to call :base() first)

  • It is informational, but you can use it via reflection. For example, place the following in your constructor:

     foreach (FieldInfo f in this.GetType().GetFields())
     {
      foreach (Attribute attr in f.GetCustomAttributes(true))
      {
       if (attr is DefaultValueAttribute)
       {
        DefaultValueAttribute dv = (DefaultValueAttribute)attr;
        f.SetValue(this, dv.Value);
       }
      }
     }
    
    Marc Gravell : In the example given, the attribute is set against the property, not the field.
    Melursus : I adapt your code for property and it's work well, thx!
    Yossarian : Ok, so - rewrite - foreach (FieldInfo f in this.GetType().GetFields()) as foreach (PropertyInfo f in this.GetType().GetProperties())
    Samuel : Wait wtf? Why is this accepted? If this answers your question, you need to rewrite your question.

How to pass variable to a function created through the guide

I developed a form using GUIDE. I want to send a variable to that function through the command line. How is this possible? If anybody knows, please tell me.

thanks in advance

From stackoverflow
  • I have no idea what you want to do exactly, but you may probably want to use the figure's UserData property:

    Passing somevar when opening the form myfig:

    h = myfig('UserData', somevar);
    

    or later:

    h = myfig();
    [...]
    set(h, 'UserData', somevar);
    

    In the figure you can access the property with:

    function some_Callback(hObject, eventdata, handles)
        somevar = get(hObject, 'UserData');
    

    See link text and link text

  • The links supplied by ymihere look very helpful. In addition, some of the options (nested functions and using GUIDATA) discussed at those links are addressed in another post on SO: How to create a GUI inside a function in MATLAB? There are a couple of examples there of how the code looks for each case.

    I am personally partial to using nested functions, as I feel like it creates shorter, cleaner code in most cases. However, it's probably the more difficult of the methods for sharing application data if you are a newer MATLAB user (it can take a little getting used to). The easiest option for you may be to set the 'UserData' property on your call to your function (as suggested by ymihere). If you saved your GUIDE GUI to "myGUI.m", then you would call:

    >> hGUI = myGUI('UserData','hello');
    

    where hGUI is a handle to your GUI object. You can then get the 'UserData' property to see that it contains the string 'hello':

    >> get(hGUI,'UserData')
    
    ans =
    
    hello
    

    Instead of 'hello', you can put anything you want, like a structure of data. You should be able to access the 'UserData' field of the figure from within the callbacks of your GUIDE m-file. You will have to get the figure handle from the handles argument passed to your callbacks.

    EDIT: One drawback to using the 'UserData' property, or some of the other methods which attach data to an object, is that the data could be accidentally (or intentionally) overwritten or otherwise corrupted by the user or other applications. The benefit of using nested functions to share data between your GUI callbacks is that it insulates your code from anything the user or another application might do. Conversely, using global variables can be rather dangerous.

Looking for Recommendation on Windows Forms .Net Resizing Component

By default, Windows Forms resize logic is limited to anchoring and docking. In the past I've rolled my own custom resize logic when required. However, I'm getting started on a project that has a large number of very complex forms that must auto-resize to different resolutions. I don't care to invest a ton of time in resize logic.

I see that there are companies selling components that advertise uniform resizing. Does anyone have any experience with any resizing components/have any recommendations?

From stackoverflow
  • Have you looked at the TableLayoutPanel? It should allow you have different "cells" each containing a single UI element and have all the cells grow at the same rate.

  • Aye, TableLayoutPanel and setting AutoSize to True on the form can be quite powerful. It takes a bit to understand what is going on, but if you have a few hours to get used to it, you can make some awesome dialogs without having to do a lot of work.

  • I found a component, .net resize, which seems to work really well. Simply drop it on the form and it makes the form completely resizable. Unfortunately, at $178 a seat it's a bit on the expensive side.

  • If you don't want to buy anything and the TableLayoutPanel is not good enough for your needs (which would mean you have some very special needs), you could always create a component yourself to manage the resize, which could work for all your forms. (a bit like .net resize you described above)

    You could also take into calculation the time it would require you to create something that does the same work as .net resize. If the time versus cost seems similar, depending on your deadlines, you might prefer to code it yourself so you have full control.

  • See following:
    http://urenjoy.blogspot.com/2008/11/make-resolution-independent-windows-app.html

Inversion of control for your Inversion of control container?

I've recently had to update a relatively large control library that uses Ninject 1.0 to Ninject 2.0 to help resolve some issues I had with 1.0. The update has gone well and I think Ninject 2.0 is a lot quicker.

However, to try and avoid this problem in the future, I have created my own interface for injecting fields and properties (which will essentially call methods on the IoC container I wish to use within the current web application). So now my control library is independent of any particular IoC container, which will speed up any changes in that area in the future.

I was wondering: has anyone else done the same thing?

I'm happy with what it has achieved but ideally I would like to update it. In my controls I create these injected fields often as protected and set them in the constructor for that control.

IBlogService _blogService = null;
IEmailService _emailService = null;

public Templates_BlogTemplate()
{
    Inject(ref _blogService);
    Inject(ref _emailService);
}

The problem I have with the above is that I have to use "ref" on all objects to actually set the property and I cannot use that on properties directly.

I would prefer to do something along these lines but I don't think it is possible.

IBlogService _blogService = null;
IEmailService _emailService = null;

public Templates_BlogTemplate()
{
    Inject(_blogService, _emailService);
}

Has anyone got any ideas of how to neaten the code up or make it work in a cleaner way? I also would want to avoid attributes so it forces the developer to make a decision to inject the variable at a certain point within the control.

All thoughts and feelings are welcome.

Thanks

From stackoverflow
  • Support property injection, and inject dependencies to "this".

    In my case, I have a base class that calls StructureMap.BuildUp(this), and the user control would have properties like:

    public IBlogService _blogService { get; set; }
    public IEmailService _emailService { get; set; }
    

    The only line specific to structuremap I have is in the base class. If ninject allows you to do this, then you can call your code handing it the control instance, and let it inject the properties based on its configuration.

  • You may want to look at IServiceLocator as described by Glenn Block

    It's a shared interface that can be used to take advantage of IoC without taking a hard dependency on the container.

indentation of multiline string

I have a script that uses the cmd Python module. The cmd module uses a triple-quoted multiline string as its help text. Something like this:

def x(self, strags = None):
    """class
    help text here
    and some more help text here"""

When running the script, the command 'help x' will print the string. It will, however, also print the indentation in front of the last two lines. I can overcome this by not indenting these lines, but that'll make my code ugl{y,ier}.

How to overcome this indenting problem? How do the pro Python coders handle this?

From stackoverflow
  • I'd handle it by having consistent indents, like this:

    def x(self, strags = None):
        """
        class
        help text here
        and some more help text here
        """
    

    Sure, it takes two lines more, but it also injects clarity (in my opinion) by making the doc comment stand out quite well.

  • Personally I try to follow PEP 8 which refers the reader to PEP 257 for Docstring Conventions. It has an entire section on multi-line docstrings.

Why would javascript click-areas not be working in IE8?

I'm trying to find a bug in an old ASP.NET application which causes IE8 to not be able to click on the following "button" area in our application:

<td 
    width="150px" 
    class="ctl00_CP1_UiCommandManager1i toolBarItem" 
    valign="middle" 
    onmouseout="onMouseOverCommand(this,1,'ctl00_CP1_UiCommandManager1',0,0);" 
    onmouseover="onMouseOverCommand(this,0,'ctl00_CP1_UiCommandManager1',0,0);" 
    onmousedown="onMouseDownCommand(this, 'ctl00_CP1_UiCommandManager1', 0, 0);" 
    onmouseup="onMouseUpCommand(this, 'ctl00_CP1_UiCommandManager1', 0, 0);" 
    id="ctl00_CP1_UiCommandManager1_0_0">

    <span style="width:100%;overflow:hidden;text-overflow:ellipsis;vertical-align:middle;white-space:nowrap;">
     NEW
    </span>
</td>

When we switch IE8 to IE7 compatibility mode, the problem disappears, IE7 is able to click on it.

Since the above HTML is generated by a third party control (Janus, http://www.janusys.com/controls), we don't have the source code.

  • has anyone experienced any similar problems with IE8?
  • I've determined that it actually fires the onMouseDownCommand command
  • also the CSS of the button area is different in IE8, it doesn't have color shading that it does in IE7. I can imagine that somewhere the HTML is not valid and IE8 being stricter is not playing along, but where?
  • any advice on how to narrow in on this bug welcome

ANSWER:

Turned out to be that the application was not checking navigator.userAgent for "MSIE 8.0" and was thus treating IE8 as a non-Internet-Explorer browser.
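A sketch of the kind of user-agent check involved (the third-party code isn't available, so the function name and the regex approach are assumptions): the fix is to recognise any "MSIE n.0" token rather than a hard-coded list of versions.

```javascript
// Hypothetical version of the corrected check: match any MSIE
// version token, which covers "MSIE 8.0" as well as older releases.
function isInternetExplorer(ua) {
  return /MSIE \d+\.\d+/.test(ua);
}

var ie8 = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)";
console.log(isInternetExplorer(ie8)); // true
```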

Thanks Lazarus for the tip; the IE8 JavaScript debugger is very nice, like Firebug for IE. I will be using it more!

From stackoverflow
  • F12 on IE8 gets you to the developer tools which has js debugging, that would probably be my first stop.

  • These events in IE8 only work on an A element.

Company doesn't want to use ASP.NET ajax...what can I do?

Developers and management tell me that they want to move away from using asp.net ajax because it is big and cumbersome.

I kind of agree, but I don't want to do all the javascript heavy lifting myself. Eventually I also want to introduce jQuery. I'm guessing right now it will also be a problem.

Is there a good post somewhere outlining pros/cons of using ajax.net vs your own custom libraries?

From stackoverflow
  • If

    they want to move away from using asp.net ajax because it is big and cumbersome

    Then JQuery is the exact answer for this.

    Marc Gravell : Especially when combined with ASP.NET MVC
    Spoike : …and ASP.NET MVC comes delivered with JQuery.
  • I don't know about a post, but you can easily implement AJAX without using any kind of library (if what you mean is really AJAX, and not all the helper stuff that gets lumped into AJAX like field validation, DOM abstraction, etc).

    This page taught me all I needed to know about real and true AJAX. http://www.hunlock.com/blogs/AJAX_for_n00bs

    JoshBerke : There is Ajax and then there is MS Ajax, the first being the idea...I totally agree with what you said
    Matt Dawdy : Actually, you've got AJAX, then you've got tons of other things that get lumped into AJAX. MS's tools give you tons of good stuff, and jQuery gives you even MORE good stuff...but none of that crap is really AJAX. It's "Web 2.0"...for lack of a better term. And we ARE lacking!
  • You need to convince your manager of something. That's an art form you learn to perfect :-)

    Show them equivalent bits of code for doing a simple function in ASP, straight JS and jQuery, and choose a sample that ensures the straight JS version is large and hideous.

    Tell them you fully agree with their concerns on ASP (butter them up, that always works well) but that you have concerns on quality and timeliness of delivery (this will scare the living daylights out of any manager).

    Your carefully selected samples should convince them that they should move from ASP to jQuery rather than ASP to straight JS. Or, worst case, they'll stay with ASP for a bit longer.

    Both these sound acceptable to you since they don't involve heavy lifting.

    I like to take my cues from the "Yes, Prime Minister" show where Sir Humphrey once commented (paraphrased):

    Give them three options, two of which can be shown to eventually culminate in World War III, then let them think about it for a bit.

    Aaron : Brilliant post!!
  • Microsoft just added jQuery IntelliSense to VS2008. That should answer your question. However, Google hosts several libraries right here: http://code.google.com/apis/ajaxlibs/ All it takes is one line of code in your web pages.

  • Even if you use jQuery (which I highly recommend), unless you go to the extent of creating pages that just return data for each of your ajax requests, you are still going to need (or want) a framework on the server side too.

    I think the main complaint with asp.net ajax is all the scripts it includes on the client side (script helpers). The UpdatePanel also gets abused since it is so easy to make use of.

    I found that when using jQuery, you can still use asp.net ajax WebMethods, but use only jQuery to make the calls to them.

    This feels like a best of both worlds to me. You get to use the WebMethods and not mess up your project with a-page-per-function, but you can skip including any of the Microsoft javascript libraries on the client side.

    More info on calling WebMethods directly from jQuery here

    • jQuery is lightweight (19 KB)
    • Cross-browser compatible
    • Has a great UI library
    • Plenty of plug-ins
    • Good documentation
    • Good support for ajax
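Calling a WebMethod from jQuery, as the answer above describes, boils down to a JSON POST. This sketch builds the $.ajax settings object; the page and method names are made up for illustration:

```javascript
// Build the settings object for an ASP.NET page-method call.
// ASP.NET page methods expect a JSON request body, and they wrap
// the return value in a "d" property of the JSON response.
function pageMethodRequest(page, method, args) {
  return {
    type: "POST",
    url: page + "/" + method,
    data: JSON.stringify(args),
    contentType: "application/json; charset=utf-8",
    dataType: "json"
  };
}

// Hypothetical usage with jQuery:
//   $.ajax(pageMethodRequest("Default.aspx", "GetPosts", { count: 10 }))
// and read the result from result.d in the success callback.
var req = pageMethodRequest("Default.aspx", "GetPosts", { count: 10 });
```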
  • Have a look at ASP.NET MVC. It addresses the concerns of your management in three areas:

    1. It drops the use of WebForms, arguably the biggest reason for ASP.NET's bulk. Without WebForms you are coding "closer to the metal", and so you have tight control over your code and user interaction while still having a framework in place for state management.
    2. It uses a well defined, easy to follow, folder structure to enforce the Model-View-Controller (MVC) model. Unlike webforms, which was supposed to separate business logic from user interaction, the MVC model strongly encourages the developer to make that necessary separation. It is the separation, more than just about anything else, that keeps your code from becoming cumbersome.
    3. It is well integrated with jQuery. On the view side, with the exception of some helper classes to get your data into your view efficiently, you're working with pure HTML/CSS/JavaScript. ASP.NET MVC officially supports jQuery and is the preferred way of integrating JavaScript.

how do I pull a remote tracking branch while in the master branch

I'm in master and I do a git pull, and I get a message indicating I need to pull a remote branch.

Typically I do a git checkout [branch] and then git pull/rebase.

Is there a way to git pull/rebase [branch] without having to do a git checkout [branch] first?

From stackoverflow
  • git pull origin branch

    Sorry, forgot to mention that pull merges.

    Alternatively, you can use git fetch origin branch to just fetch that branch. http://www.kernel.org/pub/software/scm/git/docs/git-fetch.html

    Scott : This appears to try to merge the branch with master.
  • I don't think there is any way to do an operation on a branch without switching to it. But you can combine the pull and rebase by doing git pull --rebase origin

    Scott : Aye. I'm lazy, so I just set the rebase=true option for the branch in my .git/config.
  • If you combine Apikot's and Jeremy's answers you get:

    git pull --rebase origin branch
    

    Or, if you haven't set the branch mapping up in your config:

    git pull --rebase origin <src>:<dst>
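If all you want is to bring the local branch up to date with the remote without checking it out, a fetch refspec can fast-forward it in place (a sketch, assuming the branch is named master and has no local commits):

```shell
# While on some other branch, fast-forward local "master" to match
# origin/master without checking it out. git refuses non-fast-forward
# updates here, so local commits on master are never discarded.
git fetch origin master:master
```

This only fetches and moves the ref; it does not rebase the current branch.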
    

Live Writer Image Posting

I have a blog hosted on my site; the blog engine I chose is Subtext, and I'm using Windows Live Writer as a content editor. First of all, I cannot post images in posts, and even when the posting action goes well, I cannot see images when I go to the post page. Second, how do I set up Subtext to accept images in posts?

From stackoverflow
  • The Subtext forum seems pretty quiet. Try Googling Subtext Live Writer, there seems to be plenty of people blogging about it :)

    Hoghweed : All the things you said I've done before; I've lost half a day googling for that. I was hoping that, as developers, someone here could help me, but maybe I was mistaken.

Can a set in MySQL be 0?

Can a set in MySQL be 0?

From stackoverflow
  • Sure, why not?

    CREATE TABLE t_set (id INT NOT NULL PRIMARY KEY, value SET('one', 'two'));
    
    INSERT
    INTO    t_set
    VALUES  (1, 1);
    
    SELECT  *
    FROM    t_set;
    
    
    id    value
    ----  ------
       1  one
    
    UPDATE  t_set
    SET     value = 0;
    
    SELECT  *
    FROM    t_set;
    
    id    value
    ----  ------
       1
    
  • A SET is a string object that can have zero or more values, each of which must be chosen from a list of allowed values specified when the table is created. SET column values that consist of multiple set members are specified with members separated by commas (“,”). A consequence of this is that SET member values should not themselves contain commas.

    For example, a column specified as SET('one', 'two') NOT NULL can have any of these values:

    '' 'one' 'two' 'one,two'

    http://dev.mysql.com/doc/refman/5.1/en/set.html

Have you ever found graphical modeling (UML, CASE tools) to be useful in software development?

I am curious if you might share your experiences developing software using graphical models of any kind. Flowcharts, state diagrams, whatever. Have you found any graphical tools that dramatically increased your productivity over working in pure code?

From stackoverflow
  • These tools aren't meant to simply replace some of your programming time. They are tools to aid in the analysis and design phases of software development. Even if you don't use any of the CASE elements (code generation), UML certainly has its place as a universal graphical language of software analysis and design.

    How do you measure productivity? If you mean lines of code per minute, then no, not particularly. But this is just programming - software development is much more than that!

  • I found those useful as sketches, to communicate my ideas quickly; sequence diagrams, some state machine diagrams are useful to document a project also.

  • The great thing about graphical representation is that you get a feel for how the application is going to look and feel before you jump into coding. This, in fact, speeds up the programming process because you don't have to stop and think (for the most part) about what a class is going to do or how it is going to relate to other classes and the application.

    A good tool to use is Visio. I think it's fun to use and it's pretty easy.

    Happy Coding!

  • Yes. It becomes very apparent to me how to work new features into my database schema when I draw out a crow's-foot ER diagram. Solutions to a current problem become obvious and easy to spot.

    I will add that pencil and paper is all I use for this, so my answer to your question about "graphical tools" may be no, never used them.

  • I like Visio for visualizing solution, especially useful when writing specs and explaining my thought patterns for other developers. UML is also very useful to know as it gives you a language to express yourself that is easily understood (once you learn it).

  • In a word: No.

    I've found that they don't warrant the time it takes to create them and, since things in business change rapidly, to keep them up to date.

    The best modeling technique I've found most efficient is a large pad of paper and a sharpie. It serves the purpose of getting the ideas out, figuring out hierarchies, where things belong, etc...

    It's cheap and fast, and you don't get attached to it. And it's very easy to create new ones if someone comes up with a better idea.

    T.E.D. : Heh. It's funny how we gave the *opposite* short answers, and the same long answers. :-)
  • The dedicated UML tools are pretty useless, but some of the diagrams can be helpful. The best design tool I've come across is one of those whiteboards that allows you to capture the drawing as a paper copy. I then use an ordinary graphics tool like Visio to draw and maintain copies of the diagrams for the project documentation.

    Pete Kirkham : A mobile phone camera is good enough most of the time. The self-printing whiteboards I've seen were very unreliable.
  • We do not need a UML diagram to represent the program for 1+2=3.

    But UML and other modelling techniques are standards of the SDLC (Software Development Life Cycle) and are necessary. It is a matter of communication and understanding.

  • In short, hell yes.

    Whenever I'm creating a reasonably complex program de-novo, I always need to start design with a pencil and paper (or better yet, a whiteboard). Trying to just sit down and pound something complex out with no design is a tremendous waste of time.

    I also will go to the whiteboard whenever I have some fairly complex code I need to figure out the structure of. For example, my whiteboard currently contains a call graph from a program I just could not get a handle on. Once I had it graphed, it was easy to see why. From main down to the lowest-level subroutine it ended up being 11 different routines from 3 different source files called from within each other, including one in the middle invoked as a callback and one more near the bottom that also calls itself recursively (so the call stack itself isn't 11, but way higher).

    Now with the graph I can just look up at the whiteboard whenever I forget where in the chain I am. If you can keep all that in your head as well as I can with the whiteboard, I humbly suggest you should be off counting cards in Vegas and not wasting your time coding.

    Jimmy : no doubt whiteboarding is productive, but isn't the question about UML/CASE tools?
    T.E.D. : The first sentence is "...share your experiences developing software using graphical models of any kind".
    Brian : That is pretty funny...
  • UML? Yes. Very useful. However, I'm not a fan of software tools; I've never had much luck with auto-routing lines, and have to waste a lot of time moving them around. When it comes time to make a change, it's a pain. I'm much more productive and comfortable with a whiteboard or a notebook.

    The programs are great if you need to make a clean diagram for a presentation. They aren't too useful for design.

  • I find them very useful for mapping programs that are already out there - part of my job is modernizing huge amounts of hacked scripts that have emerged over time to be very important.

    UML allows me to work out what the current thinking is - simplify it - update it then finally put it back into place.

  • No, but yes.

    I find modelling tools to be far too nitpicky about how I want to express an idea, and I spend more time "tweaking" the diagrams than actually drawing stuff out. However, when I am designing a class hierarchy or a state diagram, I do use the modelling diagrams.

    I just do them with pencil and paper instead.

  • I'd also like to add it is useful for things such as the Common Information Model (see CIM under www.dmtf.org) because without UML, it'd be very difficult to explain.

  • There's a difference between modelling and models.

    Initially in the design process, the value in producing a model is that you have to get to a concrete enough representation of the system that it can be written down. The actual models can be, and probably should be, temporary artefacts such as whiteboards, paper sketches or post-it notes and string.

    In businesses where there is a requirement to record the design process for auditing, these artefacts need to be captured. You can encode these sketchy models in a UML tool, but you rarely get a great deal of value from it over just scanning the sketches. Here we see UML tools used as fussy documentation repositories. They don't have much added value for that use.

    I've also seen UML tools used to convert freehand sketches to graphics for presentations. This is rarely a good idea, for two reasons -

    1. most model-based UML tools don't produce high quality diagrams. They often don't anti-alias correctly, and have appalling 'autorouting' implementations.
    2. understandable presentations don't have complicated diagrams; they show an abstraction. The abstraction mechanism in UML is packages, but every UML tool also has an option to hide the internals of classes. Getting into the habit of presenting UML models with the details missing hides complexity, rather than managing it. It means that a simple diagram of four classes with 400 members gets through code review, but one based on a better division of responsibilities will look more complicated.

    During the elaboration of large systems (more than a handful of developers), it's common to break the system into sub-systems, and map these sub-systems to packages (functionally) and components (structurally). These mappings are again fairly broad-brush, but they are more formal than the initial sketch. You can put them into a tool, and then you will have a structure in the tool which you can later populate. A good tool will also warn you of circular dependencies, and (if you have recorded mappings from use cases to requirements to the packages to which the requirements are assigned) then you also have useful dependency graphs and can generate Gantt charts as to what you need for a feature and when you can expect that feature to ship. (AFAIK state-of-the-art is dependency modelling and adding time attributes, but I haven't seen anything which goes as far as Gantt.)

    So if you are in a project which has to record requirements capture and assignment, you can do that in a UML tool, and you may get some extra benefit on top in terms of being able to check the dependencies and extract information to plan work breakdown schedules.

    Most of that doesn't help in small, agile shops which don't care about CMMI or ISO-9001 compliance.

    (There are also some COTS tools which provide executable UML and BPML models. These claim to provide a rapid means to de-risk a design. I haven't used them myself so won't go into details.)

    At the design stage, you can model software down to modelling classes, method and the procedural aspects of methods with sequence diagrams, state models and action languages. I've tended not to, and prefer to think in code rather than in the model at that stage. That's partly because the code generators in the tools I've used have either been poor, or too inflexible for creating high quality implementations.

    OTOH I have written simulation frameworks which take SysML models of components and systems and simulate their behavior based on such techniques. In that case there is a gain, as such a model of a system doesn't assume an execution model, whereas the code generation tools assume a fixed execution model.

    For a model to be useful, I've found it important to be able to decouple the domain model from execution semantics. You can't represent the relation f = m * a in action semantics. You can only represent the evaluation followed by the assignment f := m * a, so to get a general-purpose model that has three bidirectional ports f, m and a you'd have to write three actions, f := m * a, m := f / a, a := f / m. So in a model where a single constraint of a 7-ary relation will suffice, if your tool requires you to express it in action semantics you have to rewrite the relation 7 times. I haven't seen a COTS UML tool which can process constraint network models well enough to give a sevenfold gain over coding it yourself, but that sort of reuse can be made with a bespoke engine processing a standard UML model. If you have a rapidly changing domain model and then build your own interpreter/compiler against the meta-model for that domain, then you can have a big win. I believe some BPML tools work in a similar way to this, but haven't used them, as that isn't a domain I've worked.

    Where the model is decoupled from the execution language, this process is called model driven development, and Matlab is the most common example; if you're generating software from a model which matches the execution semantics of the target language it's called model driven architecture. In MDA you have both a domain and an implementation model, in MDD you have a domain model and a specialised transformation to map the domain to multiple executable implementations. I'm a MDD fan, and MDA seems to have little gain - you're restricting yourself to whatever subset of the implementation language your tool supports and your model can represent, you can't tune to your environment, and graphical models are often much harder to understand than linear ones - we've a million years evolution constructing complex relationships between individuals from linear narratives, (who was Pooh's youngest friend's mother?) whereas constructing an execution flow from several disjoint graphs is something we've only had to do in the last century or so.

  • I've also created domain-specific profiles of UML, and used it as a component description language. It's very good for that, and by processing the model you can create custom configuration and installation scripts for a complicated system. That's most useful where you have a system or systems comprising stock components with some parametrisation.

    When working in environments which require UML documentation of the implementation of a software product, I tend to reverse engineer it rather than the other way.

    When there's some compression of information to be had by using a machine-processable detailed model, and the cost of setting up code generation of sufficient quality is amortized across multiple uses of the model or by reuse across multiple models, then I use UML modelling tools. If I can spend a week setting up a tool which stamps out parameterised components like a cookie-cutter in a day, and it takes 3 days to do it by hand, and I have ten such components in my systems, then I'll spend that week tooling up.

    Apply the rules 'Once and Once Only' and 'You Aren't Gonna Need It' to the tools as much as to the rest of your programming.

    So the short answer is yes, I've found modelling useful, but models are less so. Unless you're creating families of similar systems, you don't gain sufficient benefit from detailed models to amortize the cost of creating them.

  • Yes! For me UML class diagrams and alike turned out to be very useful for at least two purposes:

    1. To explain design ideas to colleagues and to improve on the initial design together. You really do not want to do this by going through code line by line, and often you don't have any code yet at this stage.

    2. To document the design, but always as an illustration to clarify a textual story.

    Also note that UML class diagrams can exist in two forms: diagrams of a domain model and diagrams of source code. Domain model diagrams show how you see the customer's problem before you make the translation to code. See Larman's book, Applying UML and Patterns.

  • Try this: StarUML (it rocks!)

  • Yes, modeling saves time in design and saves time in code generation, compared to paper designs and hand-written code. All it needs is expertise with the tool you are using!

  • Yes.

    Having some visual model of key parts of the system is essential for making sure that you're arguing about the same thing.

    The specifics vary by project and domain. On several past projects, a few interaction diagrams were key to shared understanding. On another project it was a big E-R diagram. One project had a handful of state transition diagrams. The specifics were less important than the fact that we had a few diagrams on a wall that we could gather around to discuss how to grow the system incrementally, with some assurance that we had a shared understanding of the key bits.

    As far as tools, a big whiteboard, a good set of markers, and a digital camera can take you pretty far. Or flip charts and blue tape.

    The problem with tools is that they have a learning curve with a high opportunity cost for everyone to learn. And if only a few people can drive the tool, the diagrams get frozen or stale pretty fast. And people up the chain tend to have excess faith in nice-looking pictures.

  • As a technical leader I was always explaining sections of the code-base to team-members. I'm a visual thinker, so as I talked I drew diagrams. And I was always drawing the same pictures.

    My solution was to draw them as UML diagrams using Visio and put them on the company Wiki.

    When somebody asked me a question I could save time by saying "Lets bring up this diagram". And then the developer would use this as a constant reference.

    So yes, the UML diagramming techniques and Visio improved my own productivity.

  • If you'd like to focus on building your model and not on manipulating objects on a diagram, you should try Red Koda Community. Check the one-minute sequence diagram video; you can see how easily and quickly you can use it with the aid of shortcut keys.