Search Results

Search found 8440 results on 338 pages for 'wms implementation'.

Page 249/338 | < Previous Page | 245 246 247 248 249 250 251 252 253 254 255 256  | Next Page >

  • How to select a rectangle from List<Rectangle[]> with Linq

    - by dboarman
    I have a list of DrawObject[]. Each DrawObject has a Rectangle property. Here is my event:

        List<Canvas.DrawObject[]> matrix;

        void Control_MouseMove(object sender, MouseEventArgs e)
        {
            IEnumerable<Canvas.DrawObject> tile = Enumerable.Range(0, matrix.Capacity - 1)
                .Where(row => Enumerable.Range(0, matrix[row].Length - 1)
                    .Where(column => this[column, row].Rectangle.Contains(e.Location)))
                .????;
        }

    I am not sure exactly what my final select command should be in place of the "????". I was also getting an error: cannot convert IEnumerable to bool. I've read several questions about performing a LINQ query on a list of arrays, but I can't quite see what is going wrong with this. Any help?

    Edit: Apologies for not being clear about my intentions with the implementation. I intend to select the DrawObject that currently contains the mouse location.

    Read the article
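
    A possible shape for the final query, sketched by the editor rather than taken from the original thread: flatten the jagged list with SelectMany and let FirstOrDefault pick the object under the cursor. The "cannot convert IEnumerable to bool" error comes from passing the inner Where (which yields a sequence) where a bool predicate is expected; flattening first, or using Any, avoids it. The sketch assumes matrix is fully populated.

        // Sketch only; requires: using System.Linq;
        void Control_MouseMove(object sender, MouseEventArgs e)
        {
            Canvas.DrawObject hit = matrix
                .SelectMany(row => row)                                  // List<DrawObject[]> -> flat sequence of DrawObject
                .FirstOrDefault(o => o.Rectangle.Contains(e.Location));  // first object containing the mouse, or null

            if (hit != null)
            {
                // hit is the DrawObject currently under the mouse location
            }
        }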

  • Detecting UITableView scrolling

    - by Xeph
    Hi, I've subclassed UITableView (as KRTableView) and implemented the four touch-based methods (touchesBegan, touchesEnded, touchesMoved, and touchesCancelled) so that I can detect when a touch-based event is being handled on a UITableView. Essentially what I need to detect is when the UITableView is scrolling up or down. However, subclassing UITableView and creating the above methods only detects when scrolling or finger movement is occurring within a UITableViewCell, not on the entire UITableView. As soon as my finger moves onto the next cell, the touch events don't do anything. This is how I'm subclassing UITableView:

        #import "KRTableView.h"

        @implementation KRTableView

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesBegan:touches withEvent:event];
            NSLog(@"touches began...");
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesMoved:touches withEvent:event];
            NSLog(@"touchesMoved occured");
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesCancelled:touches withEvent:event];
            NSLog(@"touchesCancelled occured");
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesEnded:touches withEvent:event];
            NSLog(@"A tap was detected on KRTableView");
        }

        @end

    How can I detect when the UITableView is scrolling up or down?

    Read the article

  • TCL: How to read, extract and count occurrences in a .txt file (current directory)

    - by Passion
    Hi folks, I am a beginner at scripting and am vigorously learning Tcl for embedded systems development. I have to search for files with only the .txt format in the current directory, count the number of occurrences of each different "Interface # nnnn" string in the .txt files (where nnnn is a hexadecimal number of up to four, or at most 32, digits), and output a table of interface numbers against occurrences. I am facing implementation issues while writing the script, i.e. I am unable to implement data structures like a linked list or a two-dimensional array. I am rewriting the script using a multi-dimensional array (passing values into and out of the arrays in a procedure) in Tcl to scan through every .txt file and search for the string/regular expression "Interface # " to count and display the number of occurrences. If someone could help me complete this part it would be much appreciated.

    Search for only .txt extension files and obtain the size of each file. Here is my piece of code for searching for .txt files in the present directory:

        set files [glob *.txt]
        if { [llength $files] > 0 } {
            puts "Files:"
            foreach f [lsort $files] {
                puts " [file size $f] - $f"
            }
        } else {
            puts "(no files)"
        }

    I reckon these are all the possible logical steps needed to complete it:

    i) Once a .txt file is found, open all .txt files in read-only mode.
    ii) Create an array or list using a procedure (proc), with the interface number set to NULL and the interface count set to zero.
    iii) Scan through each .txt file and search for the string or regular expression "Interface # ".
    iv) When a match is found in a .txt file, check the interface number and increment the count for the corresponding entry; otherwise add a new element to the interface number list.
    v) If there are no files, return to the first directory.

    My output should look like the following:

        Interface    Frequency
        123f         3
        1232         4

    Read the article

  • Behavior of retained property while holder is retained

    - by Aurélien Vallée
    Hello everyone, I am a beginner Objective-C programmer, coming from the C++ world. I find it very difficult to understand the memory management offered by NSObject :/ Say I have the following class:

        @interface User : NSObject
        {
            NSString* name;
        }
        @property (nonatomic,retain) NSString* name;
        - (id) initWithName: (NSString*) theName;
        - (void) release;
        @end

        @implementation User
        @synthesize name;

        - (id) initWithName: (NSString*) theName
        {
            if ( self = [super init] )
            {
                [self setName:theName];
            }
            return self;
        }

        - (void) release
        {
            [name release];
            [super release];
        }
        @end

    Now, considering the following code, I can't understand the retain count results:

        NSString* name = [[NSString alloc] initWithCString:/*C string from sqlite3*/]; // (1) name retainCount = 1
        User* user = [[User alloc] initWithName:name];                                // (2) name retainCount = 2
        [whateverMutableArray addObject:user];                                        // (3) name retainCount = 2
        [user release];                                                               // (4) name retainCount = 1
        [name release];                                                               // (5) name retainCount = 0

    At (4), the retain count of name decreased from 2 to 1. But that's not correct: there is still the instance of user inside the array that points to name! The retain count of a variable should only decrease when the retain count of a referring variable is 0, that is, when it is dealloced, not released.

    Read the article

  • Elegantly handling constraint violations in EJB/JPA environment?

    - by hallidave
    I'm working with EJB and JPA on a Glassfish v3 app server. I have an Entity class where I'm forcing one of the fields to be unique with a @Column annotation.

        @Entity
        public class MyEntity implements Serializable {

            private String uniqueName;

            public MyEntity() {
            }

            @Column(unique = true, nullable = false)
            public String getUniqueName() {
                return uniqueName;
            }

            public void setUniqueName(String uniqueName) {
                this.uniqueName = uniqueName;
            }
        }

    When I try to persist an object with this field set to a non-unique value, I get an exception (as expected) when the transaction managed by the EJB container commits. I have two problems I'd like to solve:

    1) The exception I get is the unhelpful "javax.ejb.EJBException: Transaction aborted". If I recursively call getCause() enough times, I eventually reach the more useful "java.sql.SQLIntegrityConstraintViolationException", but this exception is part of the EclipseLink implementation and I'm not really comfortable relying on its existence. Is there a better way to get detailed error information with JPA?

    2) The EJB container insists on logging this error even though I catch it and handle it. Is there a better way to handle this error which will stop Glassfish from cluttering up my logs with useless exception information?

    Thanks.

    Read the article

  • Dependency Injection and decoupling of software layers

    - by cs31415
    I am trying to implement Dependency Injection to make my app tester-friendly. I have a rather basic doubt. The data layer uses a SqlConnection object to connect to a SQL Server database, so the SqlConnection object is a dependency of the data access layer. In accordance with the laws of dependency injection, we must not new() dependent objects, but rather accept them through constructor arguments. Not wanting to upset the DI gods, I dutifully create a constructor in my DAL that takes in a SqlConnection.

    The business layer calls the DAL. The business layer must therefore pass in a SqlConnection. The presentation layer calls the business layer, hence it too must pass a SqlConnection into the business layer. This is great for class isolation and testability, but didn't we just couple the UI and business layers to a specific implementation of the data layer which happens to use a relational database? Why do the presentation and business layers need to know that the underlying data store is SQL? What if the app needs to support multiple data sources other than SQL Server (such as XML files, comma-delimited files, etc.)?

    Furthermore, what if I add another object upon which my data layer is dependent (say, a second database)? Now I have to modify the upper layers to pass in this new object. How can I avoid this merry-go-round and reap all the benefits of DI without the pain?

    Read the article
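
    An editor's sketch (not from the thread) of the usual way out of this: the upper layers depend only on an abstraction of the data layer, and a single composition root is the only place that knows the concrete implementation happens to need a SqlConnection or connection string. All type names and the connection string below are hypothetical.

        // Hypothetical abstraction: the business and presentation layers see only this.
        public interface ICustomerRepository
        {
            Customer GetById(int id);
        }

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Concrete data-access implementation; it alone depends on SQL Server.
        public class SqlCustomerRepository : ICustomerRepository
        {
            private readonly string _connectionString;

            public SqlCustomerRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public Customer GetById(int id)
            {
                using (var connection = new System.Data.SqlClient.SqlConnection(_connectionString))
                {
                    connection.Open();
                    // ... query and map the customer (elided in this sketch) ...
                    return new Customer { Id = id };
                }
            }
        }

        // Business layer depends on the abstraction, not on SqlConnection.
        public class CustomerService
        {
            private readonly ICustomerRepository _repository;

            public CustomerService(ICustomerRepository repository)
            {
                _repository = repository;
            }

            public Customer Load(int id) { return _repository.GetById(id); }
        }

        // Composition root: the only place that wires concrete types together.
        // Swapping in an XML- or file-backed repository changes nothing above this line.
        class Program
        {
            static void Main()
            {
                ICustomerRepository repository =
                    new SqlCustomerRepository("Server=.;Database=Shop;Integrated Security=true");  // placeholder
                var service = new CustomerService(repository);
                var customer = service.Load(42);
            }
        }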

  • Quickly determine if a number is prime in Python for numbers < 1 billion

    - by Frór
    Hi, my current algorithm to check the primality of numbers in Python is way too slow for numbers between 10 million and 1 billion. I want it to be improved, knowing that I will never get numbers bigger than 1 billion.

    The context is that I can't get an implementation quick enough for solving problem 60 of Project Euler: I'm getting the answer to the problem in 75 seconds where I need it in 60 seconds. http://projecteuler.net/index.php?section=problems&id=60

    I have very little memory at my disposal, so I can't store all the prime numbers below 1 billion. I'm currently using the standard trial division tuned with 6k±1. Is there anything better than this? Do I already need the Rabin-Miller method for numbers that are this large?

        primes_under_100 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
                            53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

        def isprime(n):
            if n <= 100:
                return n in primes_under_100
            if n % 2 == 0 or n % 3 == 0:
                return False
            for f in range(5, int(n ** .5), 6):
                if n % f == 0 or n % (f + 2) == 0:
                    return False
            return True

    How can I improve this algorithm?

    Read the article
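
    An editor's note on the question above: two things stand out. First, the trial-division loop stops at int(n ** .5) exclusive, so squares of primes greater than 10 (121, 169, ...) are reported as prime; the upper bound should be int(n ** .5) + 1. Second, for n below 3,215,031,751 a deterministic Miller-Rabin test with the witnesses 2, 3, 5 and 7 is sufficient, which comfortably covers the stated range of one billion and costs only a handful of modular exponentiations per call. A sketch follows, written in C# for illustration; the same structure ports directly to Python.

        // Deterministic Miller-Rabin for n < 3,215,031,751 (covers n < 1,000,000,000).
        static bool IsPrime(uint n)
        {
            uint[] witnesses = { 2, 3, 5, 7 };
            if (n < 2) return false;
            foreach (uint p in witnesses)
            {
                if (n == p) return true;
                if (n % p == 0) return false;
            }

            // Write n - 1 as d * 2^r with d odd.
            uint d = n - 1;
            int r = 0;
            while ((d & 1) == 0) { d >>= 1; r++; }

            foreach (uint a in witnesses)
            {
                ulong x = PowMod(a, d, n);
                if (x == 1 || x == n - 1) continue;
                bool composite = true;
                for (int i = 1; i < r; i++)
                {
                    x = (x * x) % n;                 // x < n < 2^32, so x * x fits in a ulong
                    if (x == n - 1) { composite = false; break; }
                }
                if (composite) return false;         // witness a proves n composite
            }
            return true;
        }

        // Modular exponentiation by repeated squaring.
        static ulong PowMod(ulong b, ulong e, ulong m)
        {
            ulong result = 1;
            b %= m;
            while (e > 0)
            {
                if ((e & 1) == 1) result = (result * b) % m;
                b = (b * b) % m;
                e >>= 1;
            }
            return result;
        }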

  • What Scheme Does Ghuloum Use?

    - by Don Wakefield
    I'm trying to work my way through Compilers: Backend to Frontend (and Back to Front Again) by Abdulaziz Ghuloum. It seems abbreviated from what one would expect in a full course/seminar, so I'm trying to fill in the pieces myself. For instance, I have tried to use his testing framework in the R5RS flavor of DrScheme, but it doesn't seem to like the macro stuff:

        src/ghuloum/tests/tests-driver.scm:6:4: read: illegal use of open square bracket

    I've read his intro paper on the course, An Incremental Approach to Compiler Construction, which gives a great overview of the techniques used, and mentions a couple of Schemes with features one might want to implement for 'extra credit', but he doesn't mention the Scheme he uses in the course.

    Update: I'm still digging into the original question (investigating options such as Petit Scheme suggested by Eli below), but found an interesting link relating to Ghuloum's work, so I am including it here. Ikarus Scheme (http://en.wikipedia.org/wiki/Ikarus_(Scheme_implementation)) is the actual implementation of Ghuloum's ideas, and appears to have been part of his Ph.D. work. It's supposed to be one of the first implementations of R6RS. I'm trying to install Ikarus now, but the configure script doesn't want to recognize my system's install of libgmp.so, so my problems are still unresolved.

    Example: The following example seems to work in PLT 2.4.2 running in DrEd using the Pretty Big language:

        (require lang/plt-pretty-big)
        (load "/Users/donaldwakefield/ghuloum/tests/tests-driver.scm")
        (load "/Users/donaldwakefield/ghuloum/tests/tests-1.1-req.scm")

        (define (emit-program x)
          (unless (integer? x) (error "---"))
          (emit "    .text")
          (emit "    .globl scheme_entry")
          (emit "    .type scheme_entry, @function")
          (emit "scheme_entry:")
          (emit "    movl $~s, %eax" x)
          (emit "    ret"))

    Attempting to replace the require directive with #lang scheme results in the error message

        foo.scm:7:3: expand: unbound identifier in module in: emit

    which appears to be due to a failure to load tests-driver.scm. Attempting to use #lang r6rs disables the REPL, which I'd really like to use, so I'm going to try to continue with Pretty Big.

    My thanks to Eli Barzilay for his patient help.

    Read the article

  • How should I implement lazy session creation in PHP?

    - by Adam Franco
    By default, PHP's session handling mechanisms set a session cookie header and store a session even if there is no data in the session. If no data is set in the session then I don't want a Set-Cookie header sent to the client in the response, and I don't want an empty session record stored on the server. If data is added to $_SESSION, then the normal behavior should continue.

    My goal is to implement lazy session creation behavior of the sort used by Drupal 7 and Pressflow, where no session is stored (and no session cookie header is sent) unless data is added to the $_SESSION array during application execution. The point of this behavior is to allow reverse proxies such as Varnish to cache and serve anonymous traffic while letting authenticated requests pass through to Apache/PHP. Varnish (or another proxy server) is configured to pass through any requests without cookies, assuming correctly that if a cookie exists then the request is for a particular client.

    I have ported the session handling code from Pressflow that uses session_set_save_handler() and overrides the implementation of session_write() to check for data in the $_SESSION array before saving, and will write this up as a library and add an answer here if this is the best/only route to take.

    My question: While I can implement a fully custom session_set_save_handler() system, is there an easier way to get this lazy session creation behavior in a relatively generic way that would be transparent to most applications?

    Read the article

  • What is the right approach to checksumming UDP packets

    - by mr.b
    I'm building a UDP server application in C#. I've come across a packet checksum problem. As you probably know, each packet should carry some simple way of telling the receiver whether the packet data is intact. Now, UDP already has a 2-byte checksum as part of its header, which is optional, at least in the IPv4 world. The alternative method is to have a custom checksum as part of the data section of each packet, and to verify it on the receiver.

    My question boils down to: is it better to rely on the (optional) checksum in the UDP packet header, or to make a custom checksum implementation part of the packet data section?

    Perhaps the right answer depends on circumstances (as usual), so one circumstance here is that, even though the code is written and developed in .NET on Windows, it might have to run under the platform-independent Mono.NET, so the eventual solution should be compatible with other platforms. I believe that a custom checksum algorithm would be easily portable, but I'm not so sure about the first one. Any thoughts? Also, shouts about packet checksumming in general are welcome.

    Read the article
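
    If the custom, in-payload route wins out, here is a minimal editor's sketch of the idea in C#: a Fletcher-16 checksum computed over the datagram body and prepended to it before sending, then verified on receipt. The algorithm choice and the packet layout are illustrative assumptions, not a recommendation from the thread.

        using System;

        static class PacketChecksum
        {
            // Fletcher-16 over the payload: simple, fast, and trivially portable to Mono.
            public static ushort Fletcher16(byte[] data, int offset, int count)
            {
                int sum1 = 0, sum2 = 0;
                for (int i = 0; i < count; i++)
                {
                    sum1 = (sum1 + data[offset + i]) % 255;
                    sum2 = (sum2 + sum1) % 255;
                }
                return (ushort)((sum2 << 8) | sum1);
            }

            // Datagram layout (assumption): [2-byte checksum][payload].
            public static byte[] Wrap(byte[] payload)
            {
                ushort checksum = Fletcher16(payload, 0, payload.Length);
                var packet = new byte[payload.Length + 2];
                packet[0] = (byte)(checksum >> 8);
                packet[1] = (byte)(checksum & 0xFF);
                Buffer.BlockCopy(payload, 0, packet, 2, payload.Length);
                return packet;
            }

            // Returns the payload if the checksum matches, or null if the datagram is corrupt.
            public static byte[] Unwrap(byte[] packet)
            {
                if (packet.Length < 2) return null;
                ushort received = (ushort)((packet[0] << 8) | packet[1]);
                ushort computed = Fletcher16(packet, 2, packet.Length - 2);
                if (received != computed) return null;
                var payload = new byte[packet.Length - 2];
                Buffer.BlockCopy(packet, 2, payload, 0, packet.Length - 2);
                return payload;
            }
        }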

  • Override ActiveRecord#save, Method Alias? Trying to mixin functionality into save method...

    - by viatropos
    Here's the situation: I have a User model, and two modules for authentication: Oauth and Openid. Both of them override ActiveRecord#save, and have a fair share of implementation logic. Given that I can tell when the user is trying to log in via Oauth vs. Openid, but that both of them have overridden save, how do I "finally" override save such that I can conditionally call one of the modules' implementations of it? Here is the base structure of what I'm describing:

        module UsesOauth
          def self.included(base)
            base.class_eval do
              def save
                puts "Saving with Oauth!"
              end

              def save_with_oauth
                save
              end
            end
          end
        end

        module UsesOpenid
          def self.included(base)
            base.class_eval do
              def save
                puts "Saving with OpenID!"
              end

              def save_with_openid
                save
              end
            end
          end
        end

        module Sequencer
          def save
            if using_oauth?
              save_with_oauth
            elsif using_openid?
              save_with_openid
            else
              super
            end
          end
        end

        class User < ActiveRecord::Base
          include UsesOauth
          include UsesOpenid
          include Sequencer
        end

    I was thinking about using alias_method like so, but that got too complicated, because I might have 1 or 2 more similar modules. I also tried using those save_with_oauth methods (shown above), which almost works. The only thing that's missing is that I also need to call ActiveRecord::Base#save (the super method), so something like this:

        def save_with_oauth
          # do this and that
          super.save
          # the rest
        end

    But I'm not allowed to do that in Ruby. Any ideas for a clever solution to this?

    Read the article

  • Any book on designing and implementing a CRPG engine?

    - by Fabzter
    Hi! First, let me tell you, I am not really interested in making my own RPG engine (at least not in the near future, hehe), but I do feel like I want to understand the internals of how an RPG engine works. Why? Well, because I like to read about programming and design, it keeps me motivated and excited, and because I know I will learn a lot, for, even though I have been programming for some years now, I never stop considering myself ignorant... there are simply SO many things involving a game engine (especially RPG ones, like branching storylines, and items and economics!) that I'm eager to know.

    I've been searching (and thus, finding) lots of info online, but it is never focused on what I'm interested in (most of it talks about the mathematics and AI algorithm implementations, which I know quite well), which is the design of the overall structure, patterns, scripting engine, decision engine... damn, so many things I can't even imagine, since I've never done any game programming. I hope you now have an idea of how I feel, and how I want to learn for the sake of learning, and why I would want you to tell me whether you know of any books touching the topics that interest me the most.

    Read the article

  • Ordered delivery with NetNamedPipeBinding using oneWay calls

    - by Aseem Bansal
    Is it possible to guarantee ordered delivery with one-way calls using the named pipe binding?

    I have a WCF service/client communicating using the named pipe binding. The client exposes a callback contract in which all the methods of the callback are marked as one-way. Something like this:

        [ServiceContract(CallbackContract = typeof(IMyServiceCallback))]
        public interface IMyService
        {
            [OperationContract]
            void MyOperation();
        }

        public interface IMyServiceCallback
        {
            [OperationContract(IsOneWay=true)]
            void MyCallback1();

            [OperationContract(IsOneWay=true)]
            void MyCallback2();
        }

    On the server side, the implementation of the MyOperation method always calls MyCallback1 first and then MyCallback2, but I am observing that sometimes the client receives the calls in the incorrect order (MyCallback2 first and then MyCallback1).

    On searching the internet I found that order is not guaranteed with one-way operations, as mentioned here, and also that there is something called a reliable session which ensures message ordering. All the examples on the internet for reliable sessions use the TCP binding (not a single one uses NetNamedPipeBinding), and the TCP binding also has a property called ReliableSession which is not present on NetNamedPipeBinding. So I am not sure whether reliable sessions are expected to work with NetNamedPipeBinding or not.

    Question: Does a reliable session work with namedPipeBinding? If yes, how? If no, is there any other approach with which I can guarantee ordered delivery?

    Read the article
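
    If reliable sessions turn out not to be available on NetNamedPipeBinding, one application-level fallback, sketched here by the editor rather than taken from the thread, is to stamp each one-way callback with a sequence number (an extra parameter added to MyCallback1/MyCallback2 and incremented by the server per call) and have the client buffer and replay anything that arrives out of order. The dispatcher class below is hypothetical.

        using System;
        using System.Collections.Generic;

        // Replays callback handlers strictly in sequence order, buffering early arrivals.
        class OrderedDispatcher
        {
            private readonly object _gate = new object();
            private readonly SortedDictionary<long, Action> _pending = new SortedDictionary<long, Action>();
            private long _nextExpected = 1;

            // Called from each one-way callback with its sequence number and the work to run.
            public void Dispatch(long sequence, Action handler)
            {
                lock (_gate)
                {
                    _pending[sequence] = handler;
                    while (_pending.TryGetValue(_nextExpected, out Action next))
                    {
                        _pending.Remove(_nextExpected);
                        _nextExpected++;
                        next();   // runs in order: 1, 2, 3, ...
                    }
                }
            }
        }

        // Client-side callback implementation (sketch):
        //   public void MyCallback1(long sequence) { _dispatcher.Dispatch(sequence, HandleCallback1); }
        //   public void MyCallback2(long sequence) { _dispatcher.Dispatch(sequence, HandleCallback2); }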

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well?

    I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports).

    Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web.

    Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification.

    This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point.

    Read the article

  • How to trace the connection pool in a Java Web application - DBMS_APPLICATION_INFO

    - by Cleiton Garcia
    Hello, I need to improve the traceability in a web application that usually runs under a fixed database user. The DBA should have fast access to information about the heavy users that are degrading the database. Five years ago, I implemented a .NET ORM engine which logs the user and the server using the DBMS_APPLICATION_INFO package, via a wrapper above the connection manager with the following code:

        DBMS_APPLICATION_INFO.SET_MODULE('" + User + " - " + appServerMachine + "','');

    Each time a connection is obtained from the pool, the package is executed to log the information in V$SESSION.

    Has anyone discovered or implemented a solution for this problem using Toplink or Hibernate? Is there a default implementation for this problem? I found here a solution like the one I implemented 5 years ago, but I'd like to know whether anyone has a better solution that is integrated with the ORM. http://stackoverflow.com/questions/53379/using-dbmsapplicationinfo-with-jboss

    My application is built on Spring, the DAOs are implemented with JPA (using Hibernate), and it currently runs directly in Tomcat, with plans to migrate (next year) to SAP NetWeaver Application Server. Thanks.

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :)

    I am currently in the process of building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently, I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations:

    1) Logistic sigmoid for hidden layer activation, softmax for output activation
    2) Softmax for both hidden layer and output activation

    I am using gradient descent to find local maximums in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for the sigmoid. I am less certain with softmax (or whether I can use gradient descent at all). After a bit of researching, I couldn't find the answer, so I decided to compute the derivative myself and obtained softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returns a square matrix of size n x n, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret that output matrix.

    I ran each implementation with a learning rate of 0.001 and 1000 iterations of back propagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset.

    My conclusions:
    o I am fairly certain that my gradient descent is incorrectly done, but I have no idea how to fix this.
    o Perhaps I am not using enough hidden nodes.
    o Perhaps I should increase the number of layers.

    Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease

    Read the article
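
    For reference, an editor's note rather than anything from the thread: the full derivative of the softmax is a Jacobian matrix, which reconciles the two results the asker obtained. For s = softmax(x),

        \frac{\partial s_i}{\partial x_j} = s_i \left( \delta_{ij} - s_j \right),
        \qquad J = \operatorname{diag}(s) - s\,s^{\top}

    The diagonal entries are s_i(1 - s_i) = s_i - s_i^2, which is exactly the column vector computed by hand, while the off-diagonal entries -s_i s_j are what the element-wise formula drops; the n x n matrix returned by the MATLAB toolkit is this Jacobian. A practical consequence: when softmax outputs are trained with a cross-entropy loss, the error signal at the output layer collapses to (output - target), so the Jacobian never has to be handled explicitly during back propagation.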

  • Writing robust and "modern" Fortran code

    - by Blklight
    In some scientific environments, you often cannot go without FORTRAN, as most of the developers only know that idiom and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ).

    I'm a C++ guy but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM's own compilers). So I'm thinking of imposing:

    - global variables forbidden, no gotos, no jump labels, "implicit none", etc.
    - "object-oriented programming" (modules with datatypes + related subroutines)
    - modular/reusable functions, well documented, reusable libraries
    - assertions/preconditions/invariants (implemented using preprocessor statements)
    - unit tests for all (most) subroutines and "objects"
    - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.)
    - a uniform and enforced legible coding style, using code processing tools
    - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.)
    - C/C++ implementations of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.)

    (This may all seem like "evident" modern programming practice, but in a legacy FORTRAN world, most of these are big changes to the typical programmer workflow.)

    The goal with all that is to have trustworthy, maintainable and modular code, whereas in typical FORTRAN, modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm a bit joking here, but not much.)

    I searched around for references about object-oriented FORTRAN and programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers done by people with no large-scale project involvement, and dead projects. Any good URLs, advice, or reference papers/books on the subject?

    Read the article

  • How to count term frequency for set of documents?

    - by ManBugra
    I have a Lucene index with the following documents:

        doc1 := { caldari, jita, shield, planet }
        doc2 := { gallente, dodixie, armor, planet }
        doc3 := { amarr, laser, armor, planet }
        doc4 := { minmatar, rens, space }
        doc5 := { jove, space, secret, planet }

    So these 5 documents use 14 different terms:

        [ caldari, jita, shield, planet, gallente, dodixie, armor, amarr, laser, minmatar, rens, jove, space, secret ]

    The frequency of each term:

        [ 1, 1, 1, 4, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1 ]

    For easy reading:

        [ caldari:1, jita:1, shield:1, planet:4, gallente:1, dodixie:1, armor:2, amarr:1, laser:1, minmatar:1, rens:1, jove:1, space:2, secret:1 ]

    What I want to know now is how to obtain the term frequency vector for a set of documents. For example:

        Set<Documents> docs := [ doc2, doc3 ]
        termFrequencies = magicFunction(docs);
        System.out.println( termFrequencies );

    would result in the output:

        [ caldari:0, jita:0, shield:0, planet:2, gallente:1, dodixie:1, armor:2, amarr:1, laser:1, minmatar:0, rens:0, jove:0, space:0, secret:0 ]

    Removing all zeros:

        [ planet:2, gallente:1, dodixie:1, armor:2, amarr:1, laser:1 ]

    Notice that the result vector contains only the term frequencies of the set of documents, NOT the overall frequencies of the whole index! The term 'planet' is present 4 times in the whole index, but the source set of documents only contains it 2 times.

    A naive implementation would be to just iterate over all documents in the docs set, create a map and count each term. But I need a solution that would also work with a document set size of 100,000 or 500,000. Is there a feature in Lucene I can use to obtain this term vector? If there is no such feature, what would a data structure look like that someone could create at index time to obtain such a term vector easily and quickly? I'm not much of a Lucene expert, so I'm sorry if the solution is obvious or trivial.

    Read the article

  • Resolving Circular References for Objects Implementing ISerializable

    - by Chris
    I'm writing my own IFormatter implementation and I cannot think of a way to resolve circular references between two types that both implement ISerializable. Here's the usual pattern:

        [Serializable]
        class Foo : ISerializable
        {
            private Bar m_bar;

            public Foo(Bar bar)
            {
                m_bar = bar;
                m_bar.Foo = this;
            }

            public Bar Bar { get { return m_bar; } }

            protected Foo(SerializationInfo info, StreamingContext context)
            {
                m_bar = (Bar)info.GetValue("1", typeof(Bar));
            }

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("1", m_bar);
            }
        }

        [Serializable]
        class Bar : ISerializable
        {
            private Foo m_foo;

            public Foo Foo
            {
                get { return m_foo; }
                set { m_foo = value; }
            }

            public Bar() { }

            protected Bar(SerializationInfo info, StreamingContext context)
            {
                m_foo = (Foo)info.GetValue("1", typeof(Foo));
            }

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("1", m_foo);
            }
        }

    I then do this:

        Bar b = new Bar();
        Foo f = new Foo(b);
        bool equal = ReferenceEquals(b, b.Foo.Bar); // true

        // Serialise and deserialise b

        equal = ReferenceEquals(b, b.Foo.Bar);

    If I use an out-of-the-box BinaryFormatter to serialise and deserialise b, the above test for reference equality returns true, as one would expect. But I cannot conceive of a way to achieve this in my custom IFormatter. In a non-ISerializable situation I can simply revisit "pending" object fields using reflection once the target references have been resolved, but for objects implementing ISerializable it is not possible to inject new data using SerializationInfo. Can anyone point me in the right direction?

    Read the article

  • Use Objective-C without NSObject?

    - by Alex I
    I am testing some simple Objective-C code on Windows (Cygwin, gcc). This code already works in Xcode on Mac. I would like to convert my objects to not subclass NSObject (or anything else, lol). Is this possible, and how? What I have so far:

        // MyObject.h
        @interface MyObject
        - (void)myMethod:(int) param;
        @end

    and

        // MyObject.m
        #include "MyObject.h"

        @interface MyObject()
        {
            // this line is a syntax error, why?
            int _field;
        }
        @end

        @implementation MyObject

        - (id)init
        {
            // what goes in here?
            return self;
        }

        - (void)myMethod:(int) param
        {
            _field = param;
        }

        @end

    What happens when I try compiling it:

        gcc -o test MyObject.m -lobjc
        MyObject.m:4:1: error: expected identifier or ‘(’ before ‘{’ token
        MyObject.m: In function ‘-[MyObject myMethod:]’:
        MyObject.m:17:3: error: ‘_field’ undeclared (first use in this function)

    EDIT: My compiler is Cygwin's gcc, with the Cygwin gcc-objc package installed:

        gcc --version
        gcc (GCC) 4.7.3

    I have tried looking for this online and in a couple of Objective-C tutorials, but every example of a class I have found inherits from NSObject. Is it really impossible to write Objective-C without Cocoa or some kind of Cocoa replacement that provides NSObject? (Yes, I know about GNUstep. I would really rather avoid that if possible...)

    Read the article

  • Using CallExternalMethodActivity/HandleExternalEventActivity in StateMachine

    - by AngrySpade
    I'm attempting to make a StateMachine execute some database action between states. So I have a "starting" state that uses a CallExternalMethodActivity to call a "BeginExecuteNonQuery" function on a class decorated with the ExternalDataExchangeAttribute. After that it uses a SetStateActivity to change to an "ending" state. The "ending" state uses a HandleExternalEventActivity to listen for an "EndExecuteNonQuery" event.

    I can step through the local service, into the "BeginExecuteNonQuery" function. The problem is that the "EndExecuteNonQuery" event is null.

        public class FailoverWorkflowController : IFailoverWorkflowController
        {
            private readonly WorkflowRuntime workflowRuntime;
            private readonly FailoverWorkflowControlService failoverWorkflowControlService;
            private readonly DatabaseControlService databaseControlService;

            public FailoverWorkflowController()
            {
                workflowRuntime = new WorkflowRuntime();
                workflowRuntime.WorkflowCompleted += workflowRuntime_WorkflowCompleted;
                workflowRuntime.WorkflowTerminated += workflowRuntime_WorkflowTerminated;

                ExternalDataExchangeService dataExchangeService = new ExternalDataExchangeService();
                workflowRuntime.AddService(dataExchangeService);

                databaseControlService = new DatabaseControlService();
                workflowRuntime.AddService(databaseControlService);

                workflowRuntime.StartRuntime();
            }

            ...
        }

        ...

        public void BeginExecuteNonQuery(string command)
        {
            Guid workflowInstanceID = WorkflowEnvironment.WorkflowInstanceId;
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                try
                {
                    int result = ExecuteNonQuery((string)state);
                    EndExecuteNonQuery(null, new ExecuteNonQueryResultEventArgs(workflowInstanceID, result));
                }
                catch (Exception exception)
                {
                    EndExecuteNonQuery(null, new ExecuteNonQueryResultEventArgs(workflowInstanceID, exception));
                }
            }, command);
        }

    What am I doing wrong with my implementation?

    -Stan

    Read the article

  • Create dynamic factory method in PHP (< 5.3)

    - by fireeyedboy
    How would one typically create a dynamic factory method in PHP? By dynamic factory method, I mean a factory method that will autodiscover what objects there are to create, based on some aspect of the given argument. Preferably without registering them first with the factory, either. I'm OK with having the possible objects placed in one common place (a directory), though.

    I want to avoid your typical switch statement in the factory method, such as this:

        public static function factory( $someObject )
        {
            $className = get_class( $someObject );
            switch( $className ) {
                case 'Foo':
                    return new FooRelatedObject();
                    break;
                case 'Bar':
                    return new BarRelatedObject();
                    break;
                // etc...
            }
        }

    My specific case deals with the factory creating a voting repository based on the item to vote for. The items all implement a Voteable interface. Something like this:

        Default_User implements Voteable
        ...
        Default_Comment implements Voteable
        ...
        Default_Event implements Voteable
        ...

        Default_VoteRepositoryFactory
        {
            public static function factory( Voteable $item )
            {
                // autodiscover what type of repository this item needs
                // for instance, Default_User needs a Default_VoteRepository_User
                // etc...
                return new Default_VoteRepository_OfSomeType();
            }
        }

    I want to be able to drop in new Voteable items and vote repositories for these items, without touching the implementation of the factory.

    Read the article

  • WCF Callback Contract InvalidOperationException: Collection has been modified

    - by mrlane
    We are using a WCF service with a callback contract. Communication is asynchronous. The service contract is defined as:

        [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(ISessionClient), SessionMode = SessionMode.Allowed)]
        public interface ISessionService

    with a method:

        [OperationContract(IsOneWay = true)]
        void Send(Message message);

    The callback contract is defined as:

        [ServiceContract]
        public interface ISessionClient

    with methods:

        [OperationContract(IsOneWay = true, AsyncPattern = true)]
        IAsyncResult BeginSend(Message message, AsyncCallback callback, object state);

        void EndSend(IAsyncResult result);

    The implementation of BeginSend and EndSend in the callback channel is as follows:

        public void Send(ActionMessage actionMessage)
        {
            Message message = Message.CreateMessage(_messageVersion, CommsSettings.SOAPActionReceive,
                                                    actionMessage, _messageSerializer);
            lock (LO_backChannel)
            {
                try
                {
                    _backChannel.BeginSend(message, OnSendComplete, null);
                }
                catch (Exception ex)
                {
                    _hasFaulted = true;
                }
            }
        }

        private void OnSendComplete(IAsyncResult asyncResult)
        {
            lock (LO_backChannel)
            {
                try
                {
                    _backChannel.EndSend(asyncResult);
                }
                catch (Exception ex)
                {
                    _hasFaulted = true;
                }
            }
        }

    We are getting an InvalidOperationException: "Collection has been modified" on _backChannel.EndSend(asyncResult), seemingly at random, and we are really out of ideas about what is causing this. I understand what the exception means, and that concurrency issues are a common cause of such exceptions (hence the locks), but it really doesn't make any sense to me in this situation.

    The clients of our service are Silverlight 3.0 clients using PollingDuplexHttpBinding, which is the only binding available for Silverlight. We have been running fine for ages, but recently we have been doing a lot of data binding, and this is when the issues started. Any help with this is appreciated, as I am personally stumped at this time.

    Read the article

  • Spring and hibernate.cfg.xml

    - by Steve Kuo
    How do I get Spring to load Hibernate's properties from hibernate.cfg.xml?

    We're using Spring and JPA (with Hibernate as the implementation). Spring's applicationContext.xml specifies the JPA dialect and Hibernate properties:

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
            <property name="jpaDialect">
                <bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
            </property>
            <property name="jpaProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</prop>
                </props>
            </property>
        </bean>

    In this configuration, Spring is reading all the Hibernate properties via applicationContext.xml. When I create a hibernate.cfg.xml (located at the root of my classpath, at the same level as META-INF), Hibernate doesn't read it at all (it's completely ignored). What I'm trying to do is configure the Hibernate second level cache by inserting the cache properties in hibernate.cfg.xml:

        <cache usage="transactional|read-write|nonstrict-read-write|read-only"
               region="RegionName"
               include="all|non-lazy" />

    Read the article

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem.

    I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a response (2.1 quoted the charset value, but gave otherwise the same response). Apparently I need to dictate the exact Content-Type header for Exchange to be happy. Is there a way for me to do this without being forced to manually rebuild the dependency? I currently rely on published Maven artifacts and would like to continue doing so if at all possible.

    The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code.

    Read the article
