Search Results

Search found 4893 results on 196 pages for 'expect'.

  • Magic Method __set() on an Instantiated Object

    - by streetparade
    Ok, I have a problem; sorry if I can't explain it clearly, but the code speaks for itself. I have a class which generates objects from a given class name. Say the class is Modules:

        public function name($name) {
            $this->includeModule($name);
            try {
                $module = new ReflectionClass($name);
                $instance = $module->isInstantiable() ? $module->newInstance() : "Err";
                $this->addDelegate($instance);
            } catch(Exception $e) {
                Modules::Name("Logger")->log($e->getMessage());
            }
            return $this;
        }

    The addDelegate method:

        protected function addDelegate($delegate) {
            $this->aDelegates[] = $delegate;
        }

    The __call method:

        public function __call($methodName, $parameters) {
            $delegated = false;
            foreach ($this->aDelegates as $delegate) {
                if (class_exists(get_class($delegate))) {
                    if (method_exists($delegate, $methodName)) {
                        $method = new ReflectionMethod(get_class($delegate), $methodName);
                        $function = array($delegate, $methodName);
                        return call_user_func_array($function, $parameters);
                    }
                }
            }
        }

    The __get method:

        public function __get($property) {
            foreach ($this->aDelegates as $delegate) {
                if ($delegate->$property !== false) {
                    return $delegate->$property;
                }
            }
        }

    All this works fine except the function __set:

        public function __set($property, $value) {
            //print_r($this->aDelegates);
            foreach ($this->aDelegates as $k => $delegate) {
                //print_r($k);
                //print_r($delegate);
                if (property_exists($delegate, $property)) {
                    $delegate->$property = $value;
                }
            }
            //$this->addDelegate($delegate);
            print_r($this->aDelegates);
        }

        class tester {
            public function __set($name, $value) {
                self::$module->name(self::$name)->__set($name, $value);
            }
        }

        Module::test("logger")->log("test"); // this logs, it works
        echo Module::test("logger")->path;   // prints /home/bla/test/ -- this is also correct

    But I can't set any value on the logger class like this:

        Module::tester("logger")->path = "/home/bla/test/log/";

    The path property of the logger class is public, so it's not a problem of protected or private property access. How can I solve this issue? I hope I could explain my problem clearly.

  • Object inside of array -- works in one scope but not in another?

    - by Earlz
    Ok, I've been learning some of the more advanced aspects of JavaScript, and now, trying to use them, I'm stuck. Here is my code:

        function Data(){}

        function init(state){
            var item;
            item = new Data();
            item.fieldrid = 17;
            item.description = 'foo';
            state.push(item);
        }

        function findInState(state, fieldrid) {
            for (var item in state) {
                alert(item.fieldrid); // prints undefined
                if (item.fieldrid == fieldrid) {
                    return item;
                }
            }
            return null;
        }

        var s = [];
        init(s);
        alert(s[0].fieldrid);               // prints 17 (expected)
        alert(findInState(s, 17).fieldrid); // exception here. function returns null.

    A running example is here at jsbin. Why does this not work? I would expect the alert in findInState to yield 17, but instead it yields undefined. What am I doing wrong?
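
    The likely culprit is that for...in iterates over an array's keys (the index strings "0", "1", ...), not its elements, so item inside findInState is an index and item.fieldrid is undefined. A minimal sketch of a fix, indexing into the array instead (assuming the same Data objects as above):

        function findInState(state, fieldrid) {
            // Iterate indices; state[i] is the Data object itself.
            for (var i = 0; i < state.length; i++) {
                if (state[i].fieldrid == fieldrid) {
                    return state[i];
                }
            }
            return null;
        }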

  • Using Rails 3.0 beta 3 without the ActiveRecord ORM

    - by Anton
    Hi everybody! I just installed Rails 3.0 beta 3 on Windows 7 and started playing with some easy examples:

        class SignupController < ApplicationController
          def index
            @user = User.new(params[:user])
            if method.post? and @user.save
              redirect_to :root
            end
          end
        end

        class User
          def initialize(params = {})
            @email = params[:email]
            @passw = params[:passw]
          end

          def save
          end
        end

        <div align="center">
          <% form_for :user do |form| %>
            <%= form.label :email %>
            <%= form.text_field :email %><br />
            <%= form.label :password %>
            <%= form.text_field :password %><br />
            <%= form.submit :Register! %>
          <% end %>
        </div>

    When I go to /signup I get this error:

        NoMethodError in SignupController#index
        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.[]

    Is there a problem with the constructor, or what's wrong? Please, I need your help!
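
    On the initial GET request params[:user] is nil, so User.new(params[:user]) passes nil into initialize and params[:email] becomes nil.[], which matches the error text; method.post? also looks like it should be request.post?. A sketch of one way to guard the controller (hypothetical, adapted from the code above):

        class SignupController < ApplicationController
          def index
            # params[:user] is nil on GET; fall back to an empty hash
            @user = User.new(params[:user] || {})
            if request.post? && @user.save  # request.post?, not method.post?
              redirect_to :root
            end
          end
        end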

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (i.e., not Rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I could easily call these before every save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism. Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer so that the queries are run as soon as I connect, but it doesn't behave as I expect it to. Here is the code in the initializer:

        class ActiveRecord::Base
          # extend the class methods, not the instance methods
          class << self
            alias :old_establish_connection :establish_connection # hide the default

            def establish_connection(*args)
              ret = old_establish_connection(*args) # call the default

              # set up necessary session variables for audit logging;
              # call these after calling the default, to make sure the
              # connection is established first
              db = self.class.connection
              db.execute("SELECT SV.set('current_user', 'test@localhost')")
              db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err

              ret # return the default's original value
            end
          end
        end
        puts "Loaded custom establish_connection into ActiveRecord::Base"

        sycobuny:~/rails$ ruby script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        Loaded custom establish_connection into ActiveRecord::Base

    This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless because I can't check object_id for any worthwhile information, and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?
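
    One detail that stands out in the override: inside a class method, self is already the class, so self.class.connection asks Class (not ActiveRecord::Base) for a connection. A minimal sketch of that one fix, keeping the aliasing approach unchanged:

        def establish_connection(*args)
          ret = old_establish_connection(*args)
          db = connection  # self is ActiveRecord::Base here; self.class would be Class
          db.execute("SELECT SV.set('current_user', 'test@localhost')")
          db.execute("SELECT SV.set('audit_notes', NULL)")
          ret
        end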

  • Specializing function templates outside the class template definition - what is the correct way of doing this?

    - by LoudNPossiblyRight
    I am attempting to specialize a function template that is a member of a class template; the two have different template parameters. The function template specialization inside the class definition is never called, and the one outside the class definition does not even compile. Should I expect this to work in the first place, and if so, what do I have to change in this code to both compile and make it work correctly? Using VS2010:

        #include <iostream>
        using namespace std;

        template <typename T>
        class klass {
        public:
            template <typename U>
            void func(const U &u) {
                cout << "I AM A TEMPLATE FUNC" << endl;
            }

            // THIS NEVER GETS CALLED !!!
            template <>
            void klass<T>::func(const string &s) {
                cout << "I AM A STRING SPECIALIST" << endl;
            }
        };

        // THIS SPECIALIZATION WILL NOT COMPILE !!!
        template <typename T>
        template <>
        void klass<T>::func(const double &s) {
            cout << "I AM A DOUBLE SPECIALIST" << endl;
        }

        int main() {
            double d = 3.14159265;
            klass<int> k;
            k.func(1234567890);
            k.func("string");
            k.func(3.14159265);
            return 0;
        }
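
    For context: C++ does not allow an explicit specialization of a member function template unless the enclosing class template is explicitly specialized as well, which is why the out-of-class version fails to compile (and the in-class one is ill-formed, even where a compiler tolerates it). A sketch of the usual workaround, plain overloads, which overload resolution prefers to the template:

        #include <iostream>
        #include <string>

        template <typename T>
        class klass {
        public:
            template <typename U>
            void func(const U &u) { std::cout << "I AM A TEMPLATE FUNC\n"; }

            // Non-template overloads win over the member template for exact matches.
            void func(const std::string &s) { std::cout << "I AM A STRING SPECIALIST\n"; }
            void func(const double &d)      { std::cout << "I AM A DOUBLE SPECIALIST\n"; }

            // A string literal deduces U = char[N] and would still pick the
            // template, so forward it explicitly if literals should count as strings.
            void func(const char *s) { func(std::string(s)); }
        };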

  • How to support comparisons for QVariant objects containing a custom type?

    - by Tyler McHenry
    According to the Qt documentation, QVariant::operator== does not work as one might expect if the variant contains a custom type:

        bool QVariant::operator== ( const QVariant & v ) const

        Compares this QVariant with v and returns true if they are equal;
        otherwise returns false. In the case of custom types, their equalness
        operators are not called. Instead the values' addresses are compared.

    How are you supposed to get this to behave meaningfully for your custom types? In my case, I'm storing an enumerated value in a QVariant. In a header:

        enum MyEnum { Foo, Bar };
        Q_DECLARE_METATYPE(MyEnum);

    Somewhere in a function:

        QVariant var1 = QVariant::fromValue<MyEnum>(Foo);
        QVariant var2 = QVariant::fromValue<MyEnum>(Foo);
        assert(var1 == var2); // Fails!

    What do I need to do differently in order for this assertion to be true? I understand why it's not working: each variant is storing a separate copy of the enumerated value, so they have different addresses. I want to know how I can change my approach to storing these values in variants so that either this is not an issue, or so that they both reference the same underlying variable. I don't think it's possible for me to get around needing equality comparisons to work. The context is that I am using this enumeration as the UserData in items in a QComboBox, and I want to be able to use QComboBox::findData to locate the item index corresponding to a particular enumerated value.
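
    Given that the stored value is a plain enum, one low-tech way around the address comparison is to store it as a built-in type, which QVariant does compare by value; the enum is cast on the way in and out. A sketch under that assumption:

        enum MyEnum { Foo, Bar };

        // Store as int: QVariant's operator== compares built-in types by value.
        QVariant var1(static_cast<int>(Foo));
        QVariant var2(static_cast<int>(Foo));
        Q_ASSERT(var1 == var2); // holds

        MyEnum back = static_cast<MyEnum>(var1.toInt());

        // This also makes QComboBox::findData usable:
        //   combo->addItem("Foo", static_cast<int>(Foo));
        //   int index = combo->findData(static_cast<int>(Foo));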

  • Can g++ fill uninitialized POD variables with known values?

    - by Bob Lied
    I know that Visual Studio, under debugging options, will fill memory with a known value. Does g++ (any version, but gcc 4.1.2 is most interesting) have any options that would fill an uninitialized local POD structure with recognizable values?

        struct something {
            int a;
            int b;
        };

        void foo() {
            something uninitialized;
            bar(uninitialized.b);
        }

    I expect uninitialized.b to be unpredictable randomness; clearly a bug, and easily found if optimization and warnings are turned on. But compiled with -g only, there is no warning. A colleague had a case where code similar to this worked because it coincidentally had a valid value; when the compiler was upgraded, it started failing. He thought it was because the new compiler was inserting known values into the structure (much the way that VS fills 0xCC). In my own experience, it was just different random values that didn't happen to be valid. But now I'm curious: is there any setting of g++ that would make it fill memory that the standard would otherwise say should be uninitialized?

  • Freeing ImageData when deleting a Canvas

    - by user578770
    I'm writing a XUL application that uses an HTML Canvas to display bitmap images. I'm generating ImageDatas and importing them into a canvas using the putImageData function:

        for (var pageIndex = 0; pageIndex < 100; pageIndex++) {
            this.img = imageDatas[pageIndex];

            /* Create the Canvas element */
            var imgCanvasTmp = document.createElementNS("http://www.w3.org/1999/xhtml", 'html:canvas');
            imgCanvasTmp.setAttribute('width', this.img.width);
            imgCanvasTmp.setAttribute('height', this.img.height);

            /* Import the image into the Canvas */
            imgCanvasTmp.getContext('2d').putImageData(this.img, 0, 0);

            /* Use the Canvas in another part of the program (commented out for testing) */
            // this.displayCanvas(imgCanvasTmp, pageIndex);
        }

    The images are imported correctly, but there seems to be a memory leak due to the putImageData function. When exiting the for loop, I would expect the memory allocated for the canvases to be freed, but by executing the code without the putImageData call I noticed that my program ends up using 100 MB less (my images are quite big). I came to the conclusion that the putImageData function prevents the garbage collector from freeing the allocated memory. Do you have any idea how I could force the garbage collector to free the memory? Is there any way to empty the canvas? I already tried to delete the canvas using the delete operator, and to use the clearRect function, but they did nothing. I also tried to reuse the same canvas to display the image at each iteration, but the amount of memory used did not change, as if the images were imported without deleting the existing ones.

  • iPhone SDK: problem managing orientation with multiple view controllers.

    - by Tom
    I'm trying to build an iPhone application that has two subviews in the main window. Each view has its own UIViewController subclass associated with it. Also, within each controller's implementation, I've added the following method:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return YES;
        }

    Thus, I would expect both of the views to respond to changes in orientation. However, this is not the case. Only the first view added to the app's main window responds to orientation. (If I swap the order the views are added, then only the other view responds. In other words, either will work, but only one at a time.) Why is this? Is it not possible to handle the orientation changes of more than one view? Thanks! EDIT: Someone else had this question, so I'm copying my solution here: I was able to address this issue by providing a root view and a root view controller with the method shouldAutorotateToInterfaceOrientation, and adding my other views as subviews of the root view. The subviews inherit the auto-rotating behavior, and their associated view controllers don't need to override shouldAutorotateToInterfaceOrientation.

  • Benefits of "Don't Fragment" on TCP Packets?

    - by taspeotis
    One of our customers is having trouble submitting data from our application (on their PC) to a server (in a different geographical location). When sending packets under 1100 bytes everything works fine, but above this we see TCP retransmitting the packet every few seconds and getting no response. The packets we are using for testing are about 1400 bytes (but less than 1472). I can send an ICMP ping to www.google.com that is 1472 bytes and get a response (so it's not their router/first few hops). I found that our application sets the DF flag for these packets, and I believe a router along the way to the server has an MTU less than or equal to 1100 and is dropping the packet. This affects 1 client in 5000, but since everybody's routes will be different, this is expected. The data is a SOAP envelope and we expect a SOAP response back. I can't justify WHY we do it; the code to do this was written by a previous developer. So... are there any benefits or justification to setting the DF flag on TCP packets for application data? I can think of reasons it is needed for network diagnostics applications, but not in our situation (we want the data to get to the endpoint, fragmented or not). One of our sysadmins said that it might have something to do with us using SSL, but as far as I know SSL is like a stream: regardless of fragmentation, as long as the stream is rebuilt at the end, there's no problem. If there's no good justification I will be changing the behaviour of our application. Thanks in advance.

  • What happens when ToArray() is called on IEnumerable<T>?

    - by Sir Psycho
    I'm having trouble understanding what happens when ToArray() is called on an IEnumerable. I've always assumed that only the references are copied. I would expect the output here to be:

        true
        true

    But instead I get:

        true
        false

    What is going on here?

        class One { public bool Foo { get; set; } }
        class Two { public bool Foo { get; set; } }

        void Main() {
            var collection1 = new[] { new One(), new One() };
            IEnumerable<Two> stuff = Convert(collection1);

            var firstOne = stuff.First();
            firstOne.Foo = true;
            Console.WriteLine(firstOne.Foo);

            var array = stuff.ToArray();
            Console.WriteLine(array[0].Foo);
        }

        IEnumerable<Two> Convert(IEnumerable<One> col1) {
            return from c in col1
                   select new Two() { Foo = c.Foo };
        }
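
    What the snippet runs into is deferred execution: Convert returns a LINQ query, not a container, so every enumeration (First(), then ToArray()) re-runs the select and builds fresh Two objects; the mutated firstOne is simply not among the objects the array receives. A sketch of the usual fix, materializing the query once:

        // Materialize the projection once; later calls reuse the same objects.
        List<Two> stuff = Convert(collection1).ToList();

        Two firstOne = stuff.First();
        firstOne.Foo = true;
        Console.WriteLine(firstOne.Foo); // true

        Two[] array = stuff.ToArray();   // copies references out of the list
        Console.WriteLine(array[0].Foo); // true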

  • Convert Virtual Key Code to unicode string

    - by Joshua Weinberg
    I have some code I've been using to get the current keyboard layout and convert a virtual key code into a string. This works great in most situations, but I'm having trouble with some specific cases. The one that brought this to light is the accent key next to the backspace key on German QWERTZ keyboards: http://en.wikipedia.org/wiki/File:KB_Germany.svg That key generates the VK code I'd expect, kVK_ANSI_Equal, but when using a QWERTZ keyboard layout I get no description back. It ends up as a dead key because it's supposed to be composed with another key. Is there any way to catch these cases and do the proper conversion? My current code is below.

        TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource();
        CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard, kTISPropertyUnicodeKeyLayoutData);
        const UCKeyboardLayout *keyboardLayout = (const UCKeyboardLayout*)CFDataGetBytePtr(uchr);

        if (keyboardLayout) {
            UInt32 deadKeyState = 0;
            UniCharCount maxStringLength = 255;
            UniCharCount actualStringLength = 0;
            UniChar unicodeString[maxStringLength];

            OSStatus status = UCKeyTranslate(keyboardLayout, keyCode,
                                             kUCKeyActionDown, 0, LMGetKbdType(),
                                             kUCKeyTranslateNoDeadKeysBit,
                                             &deadKeyState, maxStringLength,
                                             &actualStringLength, unicodeString);

            if (actualStringLength > 0 && status == noErr)
                return [[NSString stringWithCharacters:unicodeString length:(NSInteger)actualStringLength] uppercaseString];
        }
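
    One approach that is often suggested for dead keys: let UCKeyTranslate track the dead-key state (pass 0 instead of kUCKeyTranslateNoDeadKeysBit), and when a key press yields no characters but sets the state, translate a follow-up space key to flush the stand-alone accent. A sketch under those assumptions (kVK_Space comes from HIToolbox's Events.h):

        UInt32 deadKeyState = 0;
        UniChar chars[4];
        UniCharCount length = 0;

        OSStatus status = UCKeyTranslate(keyboardLayout, keyCode, kUCKeyActionDown, 0,
                                         LMGetKbdType(), 0 /* allow dead keys */,
                                         &deadKeyState, 4, &length, chars);

        if (status == noErr && length == 0 && deadKeyState != 0) {
            // Dead key pressed: translating a space resolves the pending accent.
            status = UCKeyTranslate(keyboardLayout, kVK_Space, kUCKeyActionDown, 0,
                                    LMGetKbdType(), 0, &deadKeyState, 4, &length, chars);
        }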

  • Is there a free, small-scale, non-web-based issue/bug tracking system?

    - by Doc Brown
    I know there were posts on SO before concerning issue or bug tracking systems, like this one, but the given answers point either to commercial systems or to web-based systems, both of which seem oversized for our needs. What I am looking for is a non-commercial tool for a team of 3 to 4 developers which can be used on an existing fileserver, without the need to install additional server software like a client/server database or a web server. Some things I expect from such a system: it lets us record bugs (with a priority) and issues/ideas for new features (mostly without a priority); a description of the issue, perhaps with some additional remarks; short info on who entered the bug/issue; and one or more tags allowing us to group or filter the list. Any suggestions? EDIT: I should have said that we are using MS Windows clients, Visual Studio development, and Tortoise SVN (the latter works fine without a Subversion server). And yes, I am strict on "no server software", since all server-based solutions I have seen so far seem much too oversized/heavyweight/too-much-effort-to-be-worth-it. In fact, if no one has a better idea, we are going to use a spreadsheet, but I can't believe there are no ready-made, lightweight solutions.

  • Why do you need "extern C" for C++ callbacks to C functions?

    - by Artyom
    Hello, I find examples like this in Boost code:

        namespace boost {
            namespace {
                extern "C" void *thread_proxy(void *f)
                {
                    ....
                }
            } // anonymous

            void thread::thread_start(...)
            {
                ...
                pthread_create(something, 0, &thread_proxy, something_else);
                ...
            }
        } // boost

    Why do you actually need this extern "C"? It is clear that the thread_proxy function is private and internal, and I would not expect it to be mangled as "thread_proxy", because I actually do not need it mangled at all. In fact, in all the code I have written that runs on many platforms, I never used extern "C", and it has worked as-is with normal functions. Why is extern "C" added? My problem is that extern "C" functions pollute the global namespace and are not actually hidden as the author expects. This is not a duplicate! I'm not talking about mangling and external linkage. It is obvious in this code that external linkage is unwanted! Answer: The calling conventions of C and C++ functions are not necessarily the same, so you need to create one with the C calling convention. See 7.5 (p4) of the C++ standard.
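
    The standard's angle (7.5) is that language linkage is part of a function's type: pthread_create expects a pointer to a function with C language linkage, and a C++-linkage function is formally a different type, possibly with a different calling convention, even though most ABIs happen to let the mismatch slide. A small sketch of the distinction:

        extern "C" {
            typedef void* (*c_callback)(void*); // pointer to C-linkage function
        }
        typedef void* (*cpp_callback)(void*);   // pointer to C++-linkage function

        extern "C" void* proxy_c(void* p)   { return p; }
        void*            proxy_cpp(void* p) { return p; }

        c_callback ok = proxy_c;       // types match
        // c_callback bad = proxy_cpp; // formally ill-formed: wrong language linkage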

  • Can the jQuery.Validate plugin prevent submission of an Ajax form?

    - by berko
    I am using the jQuery Form plugin (http://malsup.com/jquery/form/) to handle the ajax submission of a form. I also have jQuery.Validate (http://docs.jquery.com/Plugins/Validation) plugged in for my client-side validation. What I am seeing is that the validation fails when I expect it to; however, it does not stop the form from submitting. When I was using a traditional (i.e., non-ajax) form, failing validation prevented the form from submitting at all, which is my desired behaviour. I know that the validation is hooked up correctly, as the validation messages still appear after the ajax submit has happened. So what am I missing that is preventing my desired behaviour? Sample code below:

        <form id="searchForm" method="post" action="/User/GetDetails">
            <input id="username" name="username" type="text" value="user.name" />
            <input id="submit" name="submit" type="submit" value="Search" />
        </form>

        <div id="detailsView">
        </div>

        <script type="text/javascript">
            var options = {
                target: '#detailsView'
            };

            $('#searchForm').ajaxForm(options);

            $('#searchForm').validate({
                rules:    { username: { required: true } },
                messages: { username: { required: "Username is a required field." } }
            });
        </script>
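
    A commonly suggested way to wire the two plugins together is ajaxForm's beforeSubmit callback, which cancels the ajax submission when it returns false; the validation plugin's valid() method fits there. A sketch, assuming both plugins are loaded as in the question:

        var options = {
            target: '#detailsView',
            // Returning false here stops the ajax submission.
            beforeSubmit: function(formData, jqForm, opts) {
                return jqForm.valid();
            }
        };

        $('#searchForm').validate({
            rules:    { username: { required: true } },
            messages: { username: { required: "Username is a required field." } }
        });
        $('#searchForm').ajaxForm(options);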

  • Web page for iPhone - large fonts instead of scrolling?

    - by chris_l
    I'm planning the layout of my web page, which should also be usable on the iPhone. I don't really have much experience with the iPhone yet; I just installed the iPhone Simulator on my Mac. The page's contents are flexible, so I think it would be better to use this flexibility to avoid the user having to scroll around the entire page. In particular I have: a header and a sidebar that will be used all the time to perform several actions, and a main content area with a number of elements (e.g. images). The UI would stay quite usable if the number of elements shown at one time were reduced for a small screen (e.g. by JavaScript). It would also be okay to make the main content area scrollable (as opposed to the entire page). The problem: if I simply display the page on the iPhone, it uses an extremely small font size, so that users must zoom in first and then scroll around, meaning they can't see the header and sidebar all the time. What's the best way to deal with this situation? Just leave it this way (very small fonts), because users expect that behaviour on the iPhone? Increase the font size (by specifying it in em or px or with xx-large -- what would be the best way?) if I detect, somehow, that the page is being displayed on an iPhone? Or is there some way to restrict the viewport size to the screen size and make it zoom in automatically? I think that would be the easiest solution in my case. Or...?
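
    For the third option, Mobile Safari honors a viewport meta tag that pins the layout width to the device width, so the page renders at a readable size without the initial zoom-out. A sketch of the usual starting point (values would need tuning for this particular layout):

        <meta name="viewport" content="width=device-width, initial-scale=1.0" />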

  • struct constructor + function parameter

    - by Oops
    Hi, I am a C++ beginner. I have the following code, and the result is not what I expect. The question is why, resp. what is wrong. For sure, most of you will see it at first glance.

        struct Complex {
            float imag;
            float real;

            Complex(float i, float r) {
                imag = i;
                real = r;
            }

            Complex(float r) {
                Complex(0, r);
            }

            std::string str() {
                std::ostringstream s;
                s << "imag: " << imag << " | real: " << real << std::endl;
                return s.str();
            }
        };

        class Complexes {
            std::vector<Complex> * _complexes;
        public:
            Complexes() {
                _complexes = new std::vector<Complex>;
            }

            void Add(Complex elem) {
                _complexes->push_back(elem);
            }

            std::string str(int index) {
                std::ostringstream oss;
                Complex c = _complexes->at(index);
                oss << c.str();
                return oss.str();
            }
        };

        int main() {
            Complexes * cs = new Complexes();
            //cs->Add(123.4f);
            cs->Add(Complex(123.4f));
            std::cout << cs->str(0);
            return 0;
        }

    For now I am interested in the basics of C++, not in complex-number theory ;-) It would be nice if the Add function also accepted a plain real (without an extra overload) instead of only a Complex object. Is this possible? Many thanks in advance, Oops
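
    The immediate bug is that Complex(float r) { Complex(0, r); } constructs a temporary and throws it away, leaving imag and real of the object being built untouched; initializing directly fixes it. The same single-float constructor, left non-explicit, is also what lets Add accept a plain float through implicit conversion. A sketch of the two constructors under that reading:

        struct Complex {
            float imag;
            float real;

            Complex(float i, float r) : imag(i), real(r) {}

            // Initialize members directly; the original body only built a temporary.
            Complex(float r) : imag(0.0f), real(r) {}
        };

        // Non-explicit Complex(float) makes this legal: 123.4f converts to Complex.
        // cs->Add(123.4f);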

  • Scope of the C++ using directive

    - by ThomasMcLeod
    From section 7.3.4 (paragraph 2) of the C++11 standard: "A using-directive specifies that the names in the nominated namespace can be used in the scope in which the using-directive appears after the using-directive. During unqualified name lookup (3.4.1), the names appear as if they were declared in the nearest enclosing namespace which contains both the using-directive and the nominated namespace. [ Note: In this context, 'contains' means 'contains directly or indirectly'. -- end note ]" What do the second and third sentences mean exactly? Please give an example. Here is the code I am attempting to understand:

        namespace A {
            int i = 7;
        }

        namespace B {
            using namespace A;
            int i = i + 11;
        }

        int main(int argc, char * argv[]) {
            std::cout << A::i << " " << B::i << std::endl;
            return 0;
        }

    It prints "7 11", not "7 18" as I would expect. (Sorry for the typo in the original post, which said "7 7".)
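
    Concretely, the quoted sentences mean the names of A behave, for unqualified lookup, as if declared in the global namespace (the nearest namespace enclosing both A and the using-directive), not as if injected into B. The declaration int i in B therefore hides A::i, and in int i = i + 11 the initializer's i already refers to B::i itself (statically zero-initialized), which is why 11 comes out. A sketch of the qualified version that produces the expected 18:

        namespace A { int i = 7; }

        namespace B {
            using namespace A;
            int i = A::i + 11; // qualified: initializes B::i from A::i, giving 18
        }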

  • Java Graphics not displaying on successive function calls, why?

    - by primehunter326
    Hi, I'm making a visualization for a BST implementation (I posted another question about it the other day). I've created a GUI which displays the viewing area and buttons. I've added code to the BST implementation to recursively traverse the tree; the function takes in coordinates along with the Graphics object, which are initially passed in by the main GUI class. My idea was that I'd just have this function redraw the tree after every update (add, delete, etc.), drawing a rectangle over everything first to "refresh" the viewing area. This also means I could alter the BST implementation (i.e., by adding a balance operation) and it wouldn't affect the visualization. The issue I'm having is that the draw function only works the first time it is called; after that it doesn't display anything. I guess I don't fully understand how the Graphics object works, since it doesn't behave the way I'd expect it to when getting passed to/called from different functions. I know the getGraphics function has something to do with it. Relevant code:

        private void draw() {
            Graphics g = vPanel.getGraphics();
            tree.drawTree(g, ORIGIN, ORIGIN);
        }

    vPanel is what I'm drawing on.

        private void drawTree(Graphics g, BinaryNode<AnyType> n, int x, int y) {
            if (n != null) {
                drawTree(g, n.left, x - 10, y + 10);
                if (n.selected) {
                    g.setColor(Color.blue);
                } else {
                    g.setColor(Color.gray);
                }
                g.fillOval(x, y, 20, 20);
                g.setColor(Color.black);
                g.drawString(n.element.toString(), x, y);
                drawTree(g, n.right, x + 10, y + 10);
            }
        }

    It is passed the root node when it is called by the public function. Do I have to have Graphics g = vPanel.getGraphics(); within the drawTree function? This doesn't make sense!! Thanks for your help.
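
    The behavior described is the classic getGraphics() trap in Swing: a Graphics obtained directly from a component is transient, and anything drawn with it is wiped out by the next system repaint, so only the first call appears to work. The idiomatic route is to paint inside paintComponent and call repaint() after each tree mutation. A minimal sketch, assuming vPanel can become a JPanel subclass and drawTree keeps the public signature used above (BST is a stand-in name for the tree type):

        import java.awt.Graphics;
        import javax.swing.JPanel;

        class VisualizationPanel extends JPanel {
            private final BST tree; // hypothetical tree type from the question
            private static final int ORIGIN = 200;

            VisualizationPanel(BST tree) { this.tree = tree; }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g); // clears the panel; no manual rectangle needed
                tree.drawTree(g, ORIGIN, ORIGIN);
            }
        }

        // After every add/delete:  vPanel.repaint();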

  • Change Postgres date format

    - by Jay
    Is there a way to change the default format of a date in Postgres? Normally when I query a Postgres database, dates come out as yyyy-mm-dd hh:mm:ss+tz, like 2011-02-21 11:30:00-05. But in one particular program the dates come out as yyyy-mm-dd hh:mm:ss.s, that is, there is no time zone and it shows tenths of a second. Apparently something is changing the default date format, but I don't know what or where. I don't think it's a server-side configuration parameter, because I can access the same database with a different program and I get the format with the timezone. I care because it appears to be ignoring my "set timezone" calls in addition to changing the format: all times come out EST. Additional info: if I write select somedate from sometable, I get the "no timezone" format. But if I write select to_char(somedate::timestamptz, 'yyyy-mm-dd hh24:mi:ss-tz'), then timezones work as I would expect. This really sounds to me like something is implicitly treating all timestamps as to_char(date::timestamp, 'yyyy-mm-dd hh24:mi:ss.m'). But I can't find anything in the documentation about how I would do this if I wanted to, nor can I find anything in the code that appears to do this. Though as I don't know what to look for, that doesn't prove much.
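
    When chasing this kind of discrepancy, it can help to compare session settings between the two programs: the described format (no zone, fractional seconds) is how a timestamp without time zone renders, and both DateStyle and TimeZone are per-session settings that a client library can silently override. A few stock PostgreSQL commands to check (a sketch; the column and table names are from the question):

        SHOW datestyle;                     -- e.g. "ISO, MDY"
        SHOW timezone;                      -- what this session actually uses
        SET timezone TO 'America/New_York'; -- per-session override

        -- Does the column itself carry a zone?
        SELECT pg_typeof(somedate) FROM sometable LIMIT 1;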

  • Name for a "naive" timekeeping system?

    - by Robert L
    I am thinking of a "naive" timekeeping system of the sort I believe would be likely to be implemented by non-specialists:

    - A day is exactly 24 hours. An hour is exactly 60 minutes. A minute is exactly 60 seconds. No exceptions (i.e. no Daylight Saving or leap seconds).
    - A leap year occurs exactly once every four years: if the year modulo 4 equals 0, it is a leap year.
    - The month lengths are the normal 31 days for January, 28 or 29 days for February, etc., that you would expect to find on a wall calendar.
    - Days of the week, if they are used, are what you would get by taking your contemporary (late 1900s / early 2000s) wall calendar and, using the above rules for leap years and month lengths, extrapolating in both directions: if the calendar goes far back enough, February 29, 1900 exists and is a Wednesday; and if the calendar goes far forward enough, February 29, 2100 exists and is a Monday.

    What name, if any, is used to describe precisely this system?

  • Entity Framework 4 Entity with EntityState of Unchanged firing update

    - by Andy
    I am using EF 4, mapping all CUD operations for my entities using sprocs. I have two tables, ADDRESS and PERSON. A PERSON can have multiple ADDRESSes associated with them. Here is the code I am running:

        Person person = (from p in context.People
                         where p.PersonUID == 1
                         select p).FirstOrDefault();

        Address address = (from a in context.Addresses
                           where a.AddressUID == 51
                           select a).FirstOrDefault();

        address.AddressLn2 = "Test";
        context.SaveChanges();

    The Address being updated is associated with the Person I am retrieving, although they are not explicitly linked in any way in the code. When context.SaveChanges() executes, not only does the update sproc for my Address entity get fired (as you would expect), but so does the update sproc for the Person entity, even though no change was made to the Person entity. When I check the EntityState of both objects before the context.SaveChanges() call, I see that my Address entity has an EntityState of "Modified" and my Person entity has an EntityState of "Unchanged". Why is the update sproc being called for the Person entity? Is there a setting of some sort that I can use to prevent this from happening?

  • JavaCC: How can I specify which token(s) are expected in a certain context?

    - by java.is.for.desktop
    Hello, everyone! I need to make JavaCC aware of a context (the current parent token) and, depending on that context, expect different token(s) to occur. Consider the following pseudo-code:

        TOKEN <abc>  { "abc*" }   // recognizes "abc", "abcd", "abcde", ...
        TOKEN <abcd> { "abcd*" }  // recognizes "abcd", "abcde", "abcdef", ...

        TOKEN <element1> { "element1" "[" expectOnly(<abc>) "]" }
        TOKEN <element2> { "element2" "[" expectOnly(<abcd>) "]" }
        ...

    So when the generated parser is "inside" a token named "element1" and it encounters "abcdef", it recognizes it as <abc>, but when it's "inside" a token named "element2" it recognizes the same string as <abcd>:

        element1 [ abcdef ] // aha! it can only be <abc>
        element2 [ abcdef ] // aha! it can only be <abcd>

    If I'm not wrong, it would behave similarly to more complex DTD definitions of an XML file. So, how can one specify in which "context" which token(s) are valid/expected? NOTE: It would not be enough for my real case to define a kind of "hierarchy" of tokens, so that "abcdef" is always first matched against <abcd> and then against <abc>. I really need context-aware tokens.
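
    JavaCC's own mechanism for this is lexical states: matching a token can switch the tokenizer into a named state, and token definitions can be restricted to particular states, which gives exactly the "inside element1" versus "inside element2" behavior. A rough sketch of the idea (state and token names are invented, regular expressions simplified, whitespace handling omitted):

        TOKEN : { < E1 : "element1" > : IN_E1 }
        TOKEN : { < E2 : "element2" > : IN_E2 }

        <IN_E1> TOKEN : {
            < LB1  : "[" >
          | < ABC  : "abc" (["a"-"z"])* >   // only <abc> is live in this state
          | < END1 : "]" > : DEFAULT
        }

        <IN_E2> TOKEN : {
            < LB2  : "[" >
          | < ABCD : "abcd" (["a"-"z"])* >  // only <abcd> is live in this state
          | < END2 : "]" > : DEFAULT
        }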

  • Unicode version of base64 encoding/ decoding

    - by Yan Cheng CHEOK
    I am using the base64 encoding/decoding from http://www.adp-gmbh.ch/cpp/common/base64.html It works pretty well with the following code:

        const std::string s = "I Am A Big Fat Cat";

        std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(s.c_str()), s.length());
        std::string decoded = base64_decode(encoded);

        std::cout << _T("encoded: ") << encoded << std::endl;
        std::cout << _T("decoded: ") << decoded << std::endl;

    However, when it comes to Unicode:

        namespace std {
        #ifdef _UNICODE
            typedef wstring tstring;
        #else
            typedef string tstring;
        #endif
        }

        const std::tstring s = _T("I Am A Big Fat Cat");

    how can I still make use of the above function? Merely changing the signatures to

        std::string base64_encode(unsigned TCHAR const*, unsigned int len);
        std::tstring base64_decode(std::string const& s);

    will not work correctly. (I expect base64_encode to return ASCII; hence, std::string should be used instead of std::tstring.)
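
    Base64 operates on bytes, so the usual route is to pick a byte encoding for the wide string first (UTF-8 being the common choice), feed those bytes to the encoder, and convert back after decoding; base64_encode can keep returning std::string, since its output is pure ASCII. A sketch using C++11's std::wstring_convert (deprecated in C++17 but widely available; the codecvt_utf8_utf16 facet suits Windows, where wchar_t is UTF-16):

        #include <codecvt>
        #include <locale>
        #include <string>

        std::string base64_encode_wide(const std::wstring &ws) {
            std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
            std::string utf8 = conv.to_bytes(ws);  // wide -> UTF-8 bytes
            return base64_encode(reinterpret_cast<const unsigned char*>(utf8.c_str()),
                                 utf8.length());
        }

        std::wstring base64_decode_wide(const std::string &b64) {
            std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
            return conv.from_bytes(base64_decode(b64));  // UTF-8 bytes -> wide
        }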

  • Unable to delete inherited entity class in EF4

    - by Coding Gorilla
    I have two entities in an EF4 model (using Model First); let's call them EntityA and EntityB. EntityA is marked as abstract, and EntityB inherits from EntityA. They are similar to the following:

        public class EntityA {
            public Guid Id;
            public string Name;
            public string Uri;
        }

        public class EntityB : EntityA {
            public string AnotherProperty;
        }

    The generated database tables look as I would expect them: EntityA as one table, and then another table like:

        EntityA_EntityB
            Id (PK, FK, uniqueidentifier)
            AnotherProperty (varchar)

    There is a foreign key constraint on EntityA_EntityB that references EntityA's Id property; no cascades are configured (although I did try changing these myself). The problem is that when I attempt to do something like:

        Context.DeleteObject(EntityA_EntityB);

    EF attempts to delete the EntityA_EntityB table record before deleting the EntityA table record, which of course violates the foreign key constraint on the EntityA_EntityB table. Using EFProfiler I see the following commands being sent to the database:

        delete [dbo].[EntityA_EntityB]
        where ([Id] = '5c02899f-09ea-2ed9-d44b-01aef80f6b64' /* @0 */)

    followed by

        delete [dbo].[EntityA]
        where ([Id] = '5c02899f-09ea-2ed9-d44b-01aef80f6b64' /* @0 */)

    I'm completely stumped as to how to get around this problem. I would think that EF should know it needs to delete the base class first, before deleting the inherited class. I know I could use triggers or other database-side solutions, but I'd rather avoid that if I can. All my classes are POCOs built using some customized T4 templates. I don't want to paste in a lot of extraneous code, but if you need more information I'll provide what I can.
