Search Results

Search found 4882 results on 196 pages for 'odd behaviour'.

Page 178/196 | < Previous Page | 174 175 176 177 178 179 180 181 182 183 184 185  | Next Page >

  • Rendering JavaScript at the server-side level. A good or bad idea?

    - by davidhong
    I want to make it clear first: this isn't a question about server-side JavaScript or running JavaScript on the server. This is a question about rendering JavaScript code (which will be executed on the client side) from server-side code. Having said that, take the ASP.NET code below as an example:

        hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');")

    This is prescribing the client-side onclick event on the server side. As opposed to:

        $('a[rel=remove]').bind('click', function(event) {
            return confirm('Are you sure you want to delete this?');
        });

    Now the question I want to ask is: what is the benefit of rendering JavaScript from the server-side code, or vice versa? I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements, for the following reasons: the server side already does whatever it needs to, including data validation, event delegation, etc.; what the server side sees as an event is not necessarily the same process on the client side (there are plenty more events on the client side - just look at custom events); what happens on the client side and on the server side during an event could be completely unrelated and decoupled; and whatever happens on the client side happens on the client side - there is no need for the server to know. The server should process and run what is given to it; how the process comes to life is not really up to it to decide in the case of client-side events; and so on and so forth.

    These are my thoughts, obviously. I want to know what others think and whether there has been any discussion on this topic. Topics branching from this argument include: code management - is it easier to render everything from the server side? Separation of concerns - is it easier if client-side logic is separated from server-side logic? Efficiency - which is more efficient, both in terms of coding and running? At the end of the day, I am trying to move my team towards the second approach. There are a lot of old guys on this team who are afraid of this change. I just wish to convince them with the right facts and stats. Let me know your thoughts.

  • Unformatted input to a std::string instead of a C-string from a binary file

    - by posop
    OK, I have this program working using C-strings. I am wondering if it is possible to read in blocks of unformatted text to a std::string? I toyed around with "in >>", but this reads in line by line. I've been breaking my code and banging my head against the wall trying to use std::string, so I thought it was time to enlist the experts. Here's a working program; you need to supply a file "a.txt" with some content to make it run. I tried to fool around with:

        in.read (const_cast<char *>(memblock.c_str()), read_size);

    but it was acting odd. I had to do std::cout << memblock.c_str() to get it to print, and memblock.clear() did not clear out the string. Anyway, if you can think of a way to use the STL I would greatly appreciate it. Here's my program using C-strings:

        // What this program does now: copies a file to a new location byte by byte
        // What this program is going to do: get small blocks of a file and encrypt them
        #include <fstream>
        #include <iostream>
        #include <string>

        int main (int argc, char * argv[])
        {
            int read_size = 16;
            int infile_size;
            std::ifstream in;
            std::ofstream out;
            char * memblock;
            int completed = 0;

            memblock = new char [read_size];
            in.open ("a.txt", std::ios::in | std::ios::binary | std::ios::ate);
            if (in.is_open())
                infile_size = in.tellg();
            out.open ("b.txt", std::ios::out | std::ios::trunc | std::ios::binary);
            in.seekg (0, std::ios::beg); // get to beginning of file

            while (!in.eof())
            {
                completed = completed + read_size;
                if (completed < infile_size)
                {
                    in.read (memblock, read_size);
                    out.write (memblock, read_size);
                } // end if
                else // last run
                {
                    delete[] memblock;
                    memblock = new char [infile_size % read_size];
                    in.read (memblock, infile_size % read_size + 1);
                    out.write (memblock, infile_size % read_size);
                } // end else
            } // end while
        } // main

    If you see anything that would make this code better, please feel free to let me know.
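
    A minimal sketch of reading fixed-size blocks straight into a std::string, as asked above. It assumes a C++11 compiler (contiguous std::string storage, so &block[0] is writable) and reuses the "a.txt" name from the question; the block handling is only illustrative:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main ()
        {
            std::ifstream in ("a.txt", std::ios::binary);
            const std::size_t read_size = 16;
            std::string block;

            while (in)
            {
                block.resize(read_size);                               // make room for one block
                in.read(&block[0], read_size);                         // fill the string's buffer directly
                block.resize(static_cast<std::size_t>(in.gcount()));   // keep only what was actually read
                if (block.empty())
                    break;
                std::cout << block;                                    // or hand the block to an encryption routine
            }
        }

    Because resize() sets the string's length explicitly, clear() and printing behave normally, which may explain the odd behaviour seen when writing through c_str().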

  • Raphael Scope Drag n drop multiple paper instances

    - by donald
    I have two Raphael paper instances. In both I want to drag and drop an element (circle). It is important for me to assign both these circles the same id. I expected no problem, as both are in different paper instances and therefore in different scope. What happens is, that somehow both elements react, when I have clicked both elements at least once. If I however give these elements different IDs everything works fine (each element only calls its "start", "drag" and "up" function if draged around). Is this intended behaviour of Raphael and do I have to assign different IDs to the elements in the different paper instances? Hopefully not and you can point me to the right direction :-) Thanks a lot for your Help in advance, Here comes the code: <!doctype html> <html> <head> <meta charset="utf-8" /> <title>DragNDrop</title> <script src="raphael-min.js"></script> </head> <body> <h1>Paper1</h1> <div id="divPaper1" style ="height: 150px; width: 300px; border:thin solid red"></div> <h1>Paper2</h1> <div id="divPaper2" style ="height: 150px; width: 300px; border:thin solid red"></div> <script> start1 = function () { console.log("start1"); } drag1 = function () { console.log("move1"); } up1 = function () { console.log("up1"); } start2 = function () { console.log("start2"); } drag2 = function () { console.log("move2"); } up2 = function () { console.log("up2"); } var paper1 = Raphael("divPaper1", "100%", "100%"); var circle1 = paper1.circle(40, 40, 30); circle1.attr("fill", "yellow"); circle1.id = "circle"; //both circles get the same id circle1.drag(drag1, start1, up1); paper2 = Raphael("divPaper2", "100%", "100%"); var circle2 = paper2.circle(40, 40, 30); circle2.attr("fill", "red"); circle2.id = "circle"; //both circles get the same id circle2.drag(drag2, start2, up2); </script> </body>

  • Linked List Design

    - by Jim Scott
    The other day in a local .NET group I attend, the following question came up: "Is it a valid interview question to ask about linked lists when hiring someone for a .NET development position?" Not having a computer science degree and being a self-taught developer, my response was that I did not feel it was appropriate, as in five years of developing with .NET I had never been exposed to linked lists and did not hear any compelling reason to use one. However, the person commented that it is a very common interview question, so I decided when I left that I would do some research on linked lists and see what I might be missing. I have read a number of posts on Stack Overflow and done various Google searches, and decided the best way to learn about them was to write my own .NET classes to see how they work from the inside out.

    Here is my class structure.

    Single linked list:

        // Constructor
        public SingleLinkedList(object value)

        // Public properties
        public bool IsTail
        public bool IsHead
        public object Value
        public int Index
        public int Count

        // Private fields not exposed through a property
        private SingleNode firstNode;
        private SingleNode lastNode;
        private SingleNode currentNode;

        // Methods
        public void MoveToFirst()
        public void MoveToLast()
        public void Next()
        public void MoveTo(int index)
        public void Add(object value)
        public void InsertAt(int index, object value)
        public void Remove(object value)
        public void RemoveAt(int index)

    Questions I have:

    - What are typical methods you would expect in a linked list?
    - What is typical behaviour when adding new records? For example, if I have 4 nodes and I am currently positioned at the second node and call Add(), should the value be added after or before the current node? Or should it be added to the end of the list?
    - Some of the designs I have seen explaining things expose the Node object outside of the LinkedList class. In my design you simply add, get and remove values, and know nothing about any node object.
    - Should the Head and Tail be placeholder objects that are only used to define the head/tail of the list?
    - I require my linked list to be instantiated with a value, which creates the first node of the list and is essentially the head and tail of the list. Would you change that?
    - What should the rules be when it comes to removing nodes? Should someone be able to remove all nodes?

    Here is my double linked list:

        // Constructor
        public DoubleLinkedList(object value)

        // Properties
        public bool IsHead
        public bool IsTail
        public object Value
        public int Index
        public int Count

        // Private fields not exposed via a property
        private DoubleNode currentNode;

        // Methods
        public void AddFirst(object value)
        public void AddLast(object value)
        public void AddBefore(object existingValue, object value)
        public void AddAfter(object existingValue, object value)
        public void Add(int index, object value)
        public void Add(object value)
        public void Remove(int index)
        public void Next()
        public void Previous()
        public void MoveTo(int index)

  • How to get this Qt state machine to work?

    - by Ton van den Heuvel
    I have two widgets that can be checked, and a numeric entry field that should contain a value greater than zero. Whenever both widgets have been checked, and the numeric entry field contains a value greater than zero, a button should be enabled. I am struggling with defining a proper state machine for this situation. So far I have the following: QStateMachine *machine = new QStateMachine(this); QState *buttonDisabled = new QState(QState::ParallelStates); buttonDisabled->assignProperty(ui_->button, "enabled", false); QState *a = new QState(buttonDisabled); QState *aUnchecked = new QState(a); QFinalState *aChecked = new QFinalState(a); aUnchecked->addTransition(wa, SIGNAL(checked()), aChecked); a->setInitialState(aUnchecked); QState *b = new QState(buttonDisabled); QState *bUnchecked = new QState(b); QFinalState *bChecked = new QFinalState(b); employeeUnchecked->addTransition(wb, SIGNAL(checked()), bChecked); b->setInitialState(bUnchecked); QState *weight = new QState(registerButtonDisabled); QState *weightZero = new QState(weight); QFinalState *weightGreaterThanZero = new QFinalState(weight); weightZero->addTransition(this, SIGNAL(validWeight()), weightGreaterThanZero); weight->setInitialState(weightZero); QState *buttonEnabled = new QState(); buttonEnabled->assignProperty(ui_->registerButton, "enabled", true); buttonDisabled->addTransition(buttonDisabled, SIGNAL(finished()), buttonEnabled); buttonEnabled->addTransition(this, SIGNAL(invalidWeight()), weightZero); machine->addState(registerButtonDisabled); machine->addState(registerButtonEnabled); machine->setInitialState(registerButtonDisabled); machine->start(); The problem here is that the following transition: buttonEnabled->addTransition(this, SIGNAL(invalidWeight()), weightZero); causes all the child states in the registerButtonDisabled state to be reverted to their initial state. This is unwanted behaviour, as I want the a and b states to remain in the same state. How do I prevent this?
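
    One commonly suggested way to keep the other regions' progress is to give each of the parallel child states a history state, and have the "invalid weight" transition target those history states together with weightZero instead of weightZero alone. The sketch below assumes Qt 4.6+ (QHistoryState, QAbstractTransition::setTargetStates) and reuses the variable names from the snippet above; it has not been verified against this exact machine:

        // History children remember whether a/b had already reached their checked state.
        QHistoryState *aHistory = new QHistoryState(a);
        QHistoryState *bHistory = new QHistoryState(b);

        // Target all three regions explicitly, so re-entering the parallel state
        // restores a and b instead of resetting them to their initial states.
        QSignalTransition *invalidWeightTransition =
            new QSignalTransition(this, SIGNAL(invalidWeight()));
        invalidWeightTransition->setTargetStates(
            QList<QAbstractState *>() << weightZero << aHistory << bHistory);
        buttonEnabled->addTransition(invalidWeightTransition);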

  • How does session_start lock in PHP?

    - by Morgan Cheng
    Originally, I just wanted to verify that session_start locks the session. So I created a PHP file as below. Basically, if the page view count is even, the page sleeps for 10 seconds; if it is odd, it doesn't. session_start is used to obtain the page view count in $_SESSION. I tried to access the page in two tabs of one browser.

    It is not surprising that the first tab takes 10 seconds, since I explicitly let it sleep. The second tab should not sleep, but it should be blocked by session_start. That works as expected. To my surprise, though, the output of the second page shows that session_start takes almost no time. Actually, the whole page seems to take no time to load. But the page does take 10 seconds to show in the browser.

        obtained lock
        Cost time: 0.00016689300537109
        Start 1269739162.1997
        End 1269739162.1998
        allover time elpased : 0.00032305717468262
        The page views: 101

    Does PHP extract session_start out of the PHP page and execute it before the other PHP statements? This is the code:

        <?php
        function float_time()
        {
            list($usec, $sec) = explode(' ', microtime());
            return (float)$sec + (float)$usec;
        }

        $allover_start_time = float_time();
        $start_time = float_time();
        session_start();
        echo "obtained lock<br/>";
        $end_time = float_time();
        $elapsed_time = $end_time - $start_time;
        echo "Cost time: $elapsed_time <br>";
        echo "Start $start_time<br/>";
        echo "End $end_time<br/>";
        ob_flush();
        flush();

        if (isset($_SESSION['views'])) {
            $_SESSION['views'] += 1;
        } else {
            $_SESSION['views'] = 0;
        }

        if ($_SESSION['views'] % 2 == 0) {
            echo "sleep 10 seconds<br/>";
            sleep(10);
        }

        $allover_end_time = float_time();
        echo "allover time elpased : " . ($allover_end_time - $allover_start_time) . "<br/>";
        echo "The page views: " . $_SESSION['views'];
        ?>

  • Binding CoreData Managed Object to NSTextFieldCell subclass

    - by ndg
    I have an NSTableView which has its first column set to contain a custom NSTextFieldCell. My custom NSTextFieldCell needs to allow the user to edit a "desc" property within my Managed Object but to also display an "info" string that it contains (which is not editable). To achieve this, I followed this tutorial. In a nutshell, the tutorial suggests editing your Managed Objects generated subclass to create and pass a dictionary of its contents to your NSTableColumn via bindings. This works well for read-only NSCell implementations, but I'm looking to subclass NSTextFieldCell to allow the user to edit the "desc" property of my Managed Object. To do this, I followed one of the articles comments, which suggests subclassing NSFormatter to explicitly state which Managed Object property you would like the NSTextFieldCell to edit. Here's the suggested implementation: @implementation TRTableDescFormatter - (BOOL)getObjectValue:(id *)anObject forString:(NSString *)string errorDescription:(NSString **)error { if (anObject != nil){ *anObject = [NSDictionary dictionaryWithObject:string forKey:@"desc"]; return YES; } return NO; } - (NSString *)stringForObjectValue:(id)anObject { if (![anObject isKindOfClass:[NSDictionary class]]) return nil; return [anObject valueForKey:@"desc"]; } - (NSAttributedString*)attributedStringForObjectValue:(id)anObject withDefaultAttributes:(NSDictionary *)attrs { if (![anObject isKindOfClass:[NSDictionary class]]) return nil; NSAttributedString *anAttributedString = [[NSAttributedString alloc] initWithString: [anObject valueForKey:@"desc"]]; return anAttributedString; } @end I assign the NSFormatter subclass to my cell in my NSTextFieldCell subclass, like so: - (void)awakeFromNib { TRTableDescFormatter *formatter = [[[TRTableDescFormatter alloc] init] autorelease]; [self setFormatter:formatter]; } This seems to work, but falls down when editing multiple rows. The behaviour I'm seeing is that editing a row will work as expected until you try to edit another row. Upon editing another row, all previously edited rows will have their "desc" value set to the value of the currently selected row. I've been doing a lot of reading on this subject and would really like to get to the bottom of this. What's more frustrating is that my NSTextFieldCell is rendering exactly how I would like it to. This editing issue is my last obstacle! If anyone can help, that would be greatly appreciated.

  • _heapwalk reports _HEAPBADNODE, causes breakpoint or loops endlessly

    - by Stefan Hubert
    I use _heapwalk to gather statistics about the process' standard heap. Under certain circumstances I observe unexpected behaviours, such as: _HEAPBADNODE is returned; some breakpoint is triggered inside _heapwalk, telling me the heap might have got corrupted; an access violation inside _heapwalk. I saw different behaviours on different computers. On one Windows XP 32-bit machine everything looked fine, whereas on two Windows XP 64-bit machines I saw the symptoms mentioned. I saw this behaviour only if the LowFragmentationHeap was enabled.

    I played around a bit. I walked the heap several times in a row inside my program: the first time doing nothing in between the subsequent calls to _heapwalk (everything fine), then again, this time doing some work (gathering statistics) in between two subsequent calls to _heapwalk. Depending on what I did there, I sometimes got the described symptoms.

    Here, finally, is the question: what exactly is safe, and what is not safe, to do in between two subsequent calls to _heapwalk during a complete heap walk? Naturally, I shall not manipulate the heap, so I double-checked that I don't call new and delete. However, my observation is that function calls with some parameters passed already cause my heap walk run to fail. I added function calls one by one and increased the number of parameters passed to them. My feeling was that two function calls with two parameters each did not work any more. However, I would like to know why. Any ideas why this does not happen on some machines? Any ideas why this only happens if the LowFragmentationHeap is enabled? Sample code, finally:

        #include <malloc.h>

        void staticMethodB( int a, int b )
        {
        }

        void staticMethodA( int a, int b, int c)
        {
            staticMethodB( 3, 6);
            return;
        }

        ...

        _HEAPINFO hinfo;
        hinfo._pentry = NULL;
        while( ( heapstatus = _heapwalk( &hinfo ) ) == _HEAPOK )
        {
            // doing nothing here works fine
            // however, if I call functions here with parameters, this causes
            // _HEAPBADNODE or something else
            staticMethodA( 3,4,5);
        }
        switch( heapstatus )
        {
        ...
        case _HEAPBADNODE:
            assert( false ); /* ERROR - bad node in heap */
            break;
        ...
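
    A conservative pattern (a sketch under the assumption that the walk fails because work between iterations touches the heap) is to record only plain values inside the loop, with all containers reserved up front, and defer every call that might allocate until _heapwalk has stopped returning _HEAPOK:

        #include <malloc.h>
        #include <cassert>
        #include <iostream>
        #include <vector>

        int main()
        {
            std::vector<size_t> usedSizes, freeSizes;
            usedSizes.reserve(1 << 16);   // pre-allocate so push_back does not hit the heap mid-walk
            freeSizes.reserve(1 << 16);   // (a guess at an upper bound; adjust for the real heap)

            _HEAPINFO hinfo;
            hinfo._pentry = NULL;
            int heapstatus;
            while ((heapstatus = _heapwalk(&hinfo)) == _HEAPOK)
            {
                if (hinfo._useflag == _USEDENTRY)
                    usedSizes.push_back(hinfo._size);
                else
                    freeSizes.push_back(hinfo._size);
            }
            assert(heapstatus == _HEAPEND || heapstatus == _HEAPEMPTY);

            // All reporting and further processing happens after the walk has finished.
            std::cout << "used blocks: " << usedSizes.size()
                      << ", free blocks: " << freeSizes.size() << std::endl;
        }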

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled. The class whose instances are being deleted is: @Entity( name = "users.Credentials" ) @Table( name = "credentials" , schema = "users" ) public class Credentials extends ModelBase { private static final long serialVersionUID = 1L; /* Some basic fields here */ /** Administrator credentials, if any */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public AdminCredentials adminCredentials; /** Active account data */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public Account activeAccount; /* Some more reverse relations here */ } (ModelBase is a class that simply declares a Long field named "id" as being automatically generated) The Account class, which is one for which constraints work, looks like this: @Entity( name = "users.Account" ) @Table( name = "accounts" , schema = "users" ) public class Account extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials the account is linked to */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } And here is the AdminCredentials class, for which the constraints are ignored. @Entity( name = "admin.Credentials" ) @Table( name = "admin_credentials" , schema = "admin" ) public class AdminCredentials extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials linked with an administrative account */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } The code that attempts to delete the Credentials instances is: try { if ( account.validationKey != null ) { this.hTemplate.delete( account.validationKey ); } this.hTemplate.delete( account.languageSetting ); this.hTemplate.delete( account ); } catch ( DataIntegrityViolationException e ) { return false; } Where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER. In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored, the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists). I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code - it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.

  • How does MySQL's ORDER BY RAND() work?

    - by Eugene
    Hi, I've been doing some research and testing on how to do fast random selection in MySQL. In the process I've run into some unexpected results, and now I am not fully sure I know how ORDER BY RAND() really works. I always thought that when you do ORDER BY RAND() on a table, MySQL adds a new column filled with random values, sorts the data by that column, and then you take, e.g., the top value, which got there randomly. I've done lots of googling and testing and finally found that the query Jay offers in his blog is indeed the fastest solution:

        SELECT * FROM Table T
        JOIN (SELECT CEIL(MAX(ID)*RAND()) AS ID FROM Table) AS x ON T.ID >= x.ID
        LIMIT 1;

    While a common ORDER BY RAND() takes 30-40 seconds on my test table, his query does the work in 0.1 seconds. He explains how this works in the blog, so I'll skip that and finally move on to the odd thing. My table is a common table with a PRIMARY KEY id and other non-indexed stuff like username, age, etc. Here's the thing I am struggling to explain:

        SELECT * FROM table ORDER BY RAND() LIMIT 1;             /* 30-40 seconds */
        SELECT id FROM table ORDER BY RAND() LIMIT 1;            /* 0.25 seconds  */
        SELECT id, username FROM table ORDER BY RAND() LIMIT 1;  /* 90 seconds    */

    I was sort of expecting to see approximately the same time for all three queries, since I am always sorting on a single column, but for some reason this didn't happen. Please let me know if you have any ideas about this. I have a project where I need to do a fast ORDER BY RAND(), and personally I would prefer to use

        SELECT id FROM table ORDER BY RAND() LIMIT 1;
        SELECT * FROM table WHERE id=ID_FROM_PREVIOUS_QUERY LIMIT 1;

    which, yes, is slower than Jay's method, but it is smaller and easier to understand. My queries are rather big ones with several JOINs and a WHERE clause, and while Jay's method still works, the query grows really big and complex, because I need to use all the JOINs and the WHERE clause in the JOINed sub-request (called x in his query). Thanks for your time!

  • UINavigationBar unresponsive after canceling a UITableView search in nav controller in tab bar in a popover

    - by Mark
    Ok, this is an odd one and I can reproduce it with a new project easily. Here is the setup: I have a UISplitViewController. In the left side I have a UITabBarController. In this tab bar controller I have two UINavigationControllers. In the navigation controllers I have UITableViewControllers. These table views have search bars on them. Ok, what happens with this setup is that if I'm in portrait mode and bring up this view in the popover and I start a search in one of the table views and cancel it, the navigation bar becomes unresponsive. That is, the "back" button as well as the right side button cannot be clicked. If I do the exact same thing in landscape mode so we are not in a popover, this doesn't happen. The navigation bar stays responsive. So, the problem only seems to happen inside a popover. I've also noticed that if I do the search but click on an item in the search results which ends up loading something into the "detail view" of the split view and dismissing the popover, and then come back to the popover and then click the Cancel button for the search, the navigation bar is responsive. My application is a universal app and uses the same tab bar controller in the iPhone interface and it works there without this issue. As I mentioned above, I can easily reproduce this with a new project. Here are the steps if you want to try it out yourself: start new project - split view create new UITableViewController class (i named TableViewController) uncomment out the viewDidLoad method as well as the rightBarButtonItem line in viewDidLoad (so we will have an Edit button in the navigation bar) enter any values you want to return from numberOfSectioinsInTableView and numberOfRowsInSection methods open MainWindow.xib and do the following: please note that you will need to be viewing the xib in the middle "view mode" so you can expand the contents of the items drag a Tab Bar Controller into the xib to replace the Navigation Controller item drag a Navigation Controller into the xib as another item under the Tab Bar Controller delete the other two view controllers that are under the Tab Bar Controller (so, now our tab bar has just the one navigation controller on it) inside the navigation controller, drag in a Table View Controller and use it to replace the View Controller (Root View Controller) change the class of the new Table View Controller to the class created above (TableViewController for me) double-click on the Table View under the new Table View Controller to open it up (will be displayed in the tab bar inside the split view controller) drag a "Search Bar and Search Display" onto the table view save the xib run the project in simulator while in portrait mode, click on the Root List button to bring up popover notice the Edit button is clickable click in the Search box - we go into search mode click the Cancel button to exit search mode notice the Edit button no longer works So, can anyone help me figure out why this is happening? Thanks, Mark

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I write it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep() then play another, etc. At its fastest, the tones and spaces can be as short as 40ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec. of sine wave into a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer then play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below) but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80ms after starting Play of a 40 ms tone that it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.

  • References between Spring beans when using a NameSpaceHandler

    - by teabot
    I'm trying to use a Spring context namespace to build some existing configuration objects in an application. I have defined a context and pretty much have if working satisfactorily - however, I'd like one bean defined by my namespace to implicitly reference another: Consider the class named 'Node': public Class Node { private String aField; private Node nextNode; public Node(String aField, Node nextNode) { ... } Now in my Spring context I have something like so: <myns:container> <myns:node aField="nodeOne"/> <myns:node aField="nodeTwo"/> </myns:container> Now I'd like nodeOne.getNode() == nodeTwo to be true. So that nodeOne.getNode() and nodeTwo refer to the same bean instance. These are pretty much the relevant parts I have in my AbstractBeanDefinitionParser: public AbstractBeanDefinition parseInternal(Element element, ParserContext parserContext) { ... BeanDefinitionBuilder containerFactory = BeanDefinitionBuilder.rootBeanDefinition(ContainerFactoryBean.class); List<BeanDefinition> containerNodes = Lists.newArrayList(); String previousNodeBeanName; // iterate backwards over the 'node' elements for (int i = nodeElements.size() - 1; i >= 0; --i) { BeanDefinitionBuilder node = BeanDefinitionBuilder.rootBeanDefinition(Node.class); node.setScope(BeanDefinition.SCOPE_SINGLETON); String nodeField = nodeElements.getAttribute("aField"); node.addConstructorArgValue(nodeField); if (previousNodeBeanName != null) { node.addConstructorArgValue(new RuntimeBeanReference(previousNodeBeanName)); } else { node.addConstructorArgValue(null); } BeanDefinition nodeDefinition = node.getBeanDefinition(); previousNodeBeanName = "inner-node-" + nodeField; parserContext.getRegistry().registerBeanDefinition(previousNodeBeanName, nodeDefinition); containerNodes.add(node); } containerFactory.addPropertyValue("nodes", containerNodes); } When the application context is created my Node instances are created and recognized as singletons. Furthermore, the nextNode property is populated with a Node instance with the previous nodes configuration - however, it isn't the same instance. If I output a log message in Node's constructor I see two instances created for each node bean definition. I can think of a few workarounds myself but I'm keen to use the existing model. So can anyone tell me how I can pass these runtime bean references so that I get the correct singleton behaviour for my Node instances?

  • Unit finalization order for application, compiled with run-time packages?

    - by Alexander
    I need to execute my code after the finalization of the SysUtils unit. I've placed my code in a separate unit and included it first in the uses clause of the dpr file, like this:

        project Project1;

        uses
          MyUnit, // <- my separate unit
          SysUtils, Classes, SomeOtherUnits;

        procedure Test;
        begin
          //
        end;

        begin
          SetProc(Test);
        end.

    MyUnit looks like this:

        unit MyUnit;

        interface

        procedure SetProc(AProc: TProcedure);

        implementation

        var
          Test: TProcedure;

        procedure SetProc(AProc: TProcedure);
        begin
          Test := AProc;
        end;

        initialization

        finalization
          Test;
        end.

    Note that MyUnit doesn't have any uses. This is a usual Windows exe, no console, without forms, and compiled with default run-time packages. MyUnit is not part of any package (but I've tried to use it from a package too). I expect that the finalization section of MyUnit will be executed after the finalization section of SysUtils. This is what Delphi's help tells me. However, this is not always the case.

    I have two test apps, which differ a bit in the code in the Test routine/dpr file and the units listed in uses. MyUnit, however, is listed first in all cases. One application runs as expected:

        Halt0 - FinalizeUnits - ...other units... - SysUtils' finalization - MyUnit's finalization - ...other units...

    But the second one does not. MyUnit's finalization is invoked before SysUtils' finalization. The actual call chain looks like this:

        Halt0 - FinalizeUnits - ...other units... - SysUtils' finalization (skipped) - MyUnit's finalization - ...other units... - SysUtils' finalization (executed)

    Both projects have very similar settings. I tried a lot to remove/minimize their differences, but I still do not see a reason for this behaviour. I've tried to debug this and found out that every unit seems to have some kind of reference counting, and it seems that InitTable contains multiple references to the same unit. When SysUtils' finalization section is called the first time, it changes the reference counter and does nothing. Then MyUnit's finalization is executed. And then SysUtils is called again, but this time the ref-count reaches zero and the finalization section is executed:

        Finalization: // SysUtils' finalization
        5003B3F0 55               push ebp                   // here and below is some form of stub
        5003B3F1 8BEC             mov ebp,esp
        5003B3F3 33C0             xor eax,eax
        5003B3F5 55               push ebp
        5003B3F6 688EB50350       push $5003b58e
        5003B3FB 64FF30           push dword ptr fs:[eax]
        5003B3FE 648920           mov fs:[eax],esp
        5003B401 FF05DCAD1150     inc dword ptr [$5011addc]  // here: some sort of reference counter
        5003B407 0F8573010000     jnz $5003b580              // <- this jump skips execution of finalization for the first call
        5003B40D B8CC4D0350       mov eax,$50034dcc          // here and below is the actual SysUtils finalization section
        ...

    Can anyone shed light on this issue? Am I missing something?

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate, I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try and run the following test, I get an exception. I've triede a couple of different ways to perform a similar query, but almost identical exception for all (it differs in which LINQ method being executed). The first test: [Test] public void QueryLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .FirstOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) The second test: [Test] public void QueryLatestBatch2() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .OrderBy(x => x.Executed) .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source) However, this one is passing (using QueryOver<): [Test] public void QueryOverLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.QueryOver<Batch>() .OrderBy(x => x.Executed).Asc .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); Assert.That(batch.Executed, Is.LessThan(DateTime.Now)); } } Using the QueryOver< API is not bad at all, but I'm just kind of baffled that the Query< API isn't working, which is kind of sad, since the First() operation is very concise, and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test. EDIT I'm using Oracle 11g, my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver< API, but not through LINQ.

  • About fork system call and global variables

    - by lurks
    I have this program in C++ that forks two new processes:

        #include <pthread.h>
        #include <iostream>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <cstdlib>
        using namespace std;

        int shared;

        void func(){
            extern int shared;
            for (int i=0; i<10;i++)
                shared++;
            cout<<"Process "<<getpid()<<", shared "<<shared<<", &shared "<<&shared<<endl;
        }

        int main(){
            extern int shared;
            pid_t p1,p2;
            int status;
            shared=0;
            if ((p1=fork())==0) {func();exit(0);};
            if ((p2=fork())==0) {func();exit(0);};
            for(int i=0;i<10;i++)
                shared++;
            waitpid(p1,&status,0);
            waitpid(p2,&status,0);
            cout<<"shared variable is: "<<shared<<endl;
            cout<<"Process "<<getpid()<<", shared "<<shared<<", &shared "<<&shared<<endl;
        }

    The two forked processes increment the shared variable, and the parent process does the same. As the variable belongs to the data segment of each process, the final value is 10, because the increments are independent. However, the memory address of the shared variable is the same in every process; you can verify this by compiling and watching the output of the program. How can that be explained? I cannot understand it. I thought I knew how fork() works, but this seems very odd. I need an explanation of why the address is the same, although they are separate variables.
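
    The printed addresses are virtual addresses: after fork() each process gets its own copy-on-write image of the data segment, so the same virtual address ends up referring to different physical pages once a process writes to it. For contrast, here is a sketch (assuming Linux and a C++11 compiler) where the counter really is shared, by mapping it with MAP_SHARED before forking:

        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <iostream>

        int main()
        {
            // One int, visible to parent and child alike because the mapping is shared.
            int *shared = static_cast<int *>(mmap(nullptr, sizeof(int),
                                                  PROT_READ | PROT_WRITE,
                                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0));
            *shared = 0;

            if (fork() == 0) {
                for (int i = 0; i < 10; i++)
                    ++*shared;               // child writes the same physical page
                _exit(0);
            }
            for (int i = 0; i < 10; i++)
                ++*shared;                   // parent writes it too

            wait(nullptr);
            // Typically prints 20 (the two loops race on a non-atomic int),
            // unlike the 10 printed by the copy-on-write global above.
            std::cout << "shared = " << *shared << " at " << shared << std::endl;
        }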

  • WordPress Write Cache Problem with Multiple Sessions

    - by Volomike
    I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He says he wants it to catch a page view event, and if it's the right time of day (24 hours since last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again. However, the strangest thing is occurring when placed under load with multiple sessions hitting the site with page views. It's firing instead of one post -- it's randomly doing like 1, 2, or 3 extra posts, with each one thinking that it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag just yet until a couple microseconds pass. The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal. But what it's doing is letting those other sessions in. To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule. I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.

  • Should I deal with files longer than MAX_PATH?

    - by John
    Just had an interesting case. My software reported back a failure caused by a path being longer than MAX_PATH. The path was just a plain old document in My Documents, e.g.:

        C:\Documents and Settings\Bill\Some Stupid FOlder Name\A really ridiculously long file thats really very very very..........very long.pdf

    Total length 269 characters (MAX_PATH == 260). The user wasn't using an external hard drive or anything like that; this was a file on a Windows-managed drive. So my question is this: should I care? I'm not saying "can I deal with the long paths", I'm asking "should I". Yes, I'm aware of the "\\?\" Unicode hack on some Win32 APIs, but it seems this hack is not without risk (as it changes the behaviour of the way the APIs parse paths) and also isn't supported by all APIs.

    So anyway, let me just state my position/assertions. First, presumably the only way the user was able to break this limit is if the app she used uses the special Unicode hack. It's a PDF file, so maybe the PDF tool she used uses this hack. I tried to reproduce this (by using the Unicode hack) and experimented. What I found was that although the file appears in Explorer, I can do nothing with it. I can't open it; I can't choose "Properties" (Windows 7). Other common apps can't open the file (e.g. IE, Firefox, Notepad). Explorer will also not let me create files/dirs which are too long - it just refuses. Ditto for the command-line tool cmd.exe.

    So basically, one could look at it this way: a rogue tool has allowed the user to create a file which is not accessible to a lot of Windows (e.g. Explorer). I could take the view that I shouldn't have to deal with this. (As an aside, this isn't a vote of approval for a short max path length: I think 260 chars is a joke, I'm just saying that if the Windows shell and some APIs can't handle 260 then why should I?) So, is this a fair view? Should I say "not my problem"? Thanks! John
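
    For reference, the "\\?\" form discussed above looks roughly like this on the wide-character Win32 APIs. This is only a sketch: the path is hypothetical, the prefix requires a fully qualified path with no "." or ".." components, and as noted it is not honoured by every API:

        #include <windows.h>
        #include <iostream>

        int wmain()
        {
            // Hypothetical over-length path, given in the extended "\\?\" syntax.
            const wchar_t *longPath =
                L"\\\\?\\C:\\some\\deeply\\nested\\folders\\with a very long name\\document.pdf";

            HANDLE h = CreateFileW(longPath, GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (h == INVALID_HANDLE_VALUE)
                std::wcout << L"CreateFileW failed: " << GetLastError() << L"\n";
            else
                CloseHandle(h);
        }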

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .Net 3.5. It runs as service on a Windows Server 2008 Standard Edition. It works great but after some time (days) we notice massive slowdowns and an increased working set. We expected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC Heap doesn’t show any leak but we noticed that the JIT code heap has grown from 8MB after the start to more than 1GB after a few days. We don’t use any dynamic code generation techniques by our own. We use Linq2SQL which is known for dynamic code generation but we don’t know if it can cause such a problem. The main question is if there is any technique to analyze the dump and check where all this Host Code Heap blocks that are shown in the WinDBG dumps come from? [Update] In the mean time we did some more analysis and had Linq2SQL as probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour where more and more Host Code Heap blocks are created over time. using System; using System.Linq; using System.Threading; namespace LinqStressTest { class Program { static void Main(string[] args) { for (int i = 0; i < 100; ++ i) ThreadPool.QueueUserWorkItem(Worker); while(runs < 1000000) { Thread.Sleep(5000); } } static void Worker(object state) { for (int i = 0; i < 50; ++i) { using (var ctx = new DataClasses1DataContext()) { long id = rnd.Next(); var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault(); } } var localruns = Interlocked.Add(ref runs, 1); System.Console.WriteLine("Action: " + localruns); ThreadPool.QueueUserWorkItem(Worker); } static Random rnd = new Random(); static long runs = 0; } } When we replace the Linq query with a precompiled one, the problem seems to disappear.

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

    - full-text search: searches all address book contents, messages, anything else being a plus
    - better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.)
    - bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone)
    - no multitasking capabilities worth mentioning
    - and in general no e.g. weekly software updates, making the phone much more usable (even if it had to be done over USB, rather than over the network)

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is - and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up?

    Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask, and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme, if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation: messages, calling, calendar, everything out of reach, although I can receive a call. No "battery life" issue there: it's charging while connected. BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast-forward functionality only aggravates the problem.

  • NServiceBus serialization issue of derived types

    - by Tiju John
    Hi guys, to set the context: I am exchanging messages between my NServiceBus client and NServiceBus server. There is the namespace xyz.Messages with a class Message : IMessage. I have more messages in other DLLs, like xyz.Messages.Domain1, xyz.Messages.Domain2, xyz.Messages.Domain3, with messages that derive from that base message, Message. I have the endpoints defined like this.

    At the client:

        <UnicastBusConfig>
          <MessageEndpointMappings>
            <add Messages="xyz.Messages" Endpoint="xyzServerQueue" />
            <add Messages="xyz.Messages.Domain1" Endpoint="xyzServerQueue" />
            <add Messages="xyz.Messages.Domain2" Endpoint="xyzServerQueue" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    At the server:

        <UnicastBusConfig>
          <MessageEndpointMappings>
            <add Messages="xyz.Messages" Endpoint="xyzClientQueue" />
            <add Messages="xyz.Messages.Domain1" Endpoint="xyzClientQueue" />
            <add Messages="xyz.Messages.Domain2" Endpoint="xyzClientQueue" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And the bus is initialized as:

        IBus serviceBus = Configure.With()
            .SpringBuilder()
            .XmlSerializer()
            .MsmqTransport()
            .UnicastBus()
            .LoadMessageHandlers()
            .CreateBus()
            .Start();

    Now, when I try sending an instance of the Message type or any of the types derived from Message, it successfully sends the message over, and at the server I get the proper type. For example:

        Message message = new Message();
        Bus.Send(message); // works fine, transfers Message type

        message = new MessageDerived1();
        Bus.Send(message); // works fine, transfers MessageDerived1 type

        message = new MessageDerived2();
        Bus.Send(message); // works fine, transfers MessageDerived2 type

    My problem arises when a type, say MessageDerived2, contains a member variable of type Message: when I assign a derived type to it, the type is not properly transferred over the wire. It is transferred only as Message, not as the derived type.

        public class MessageDerived2 : Message
        {
            public Message message;
        }

        MessageDerived2 messageDerived2 = new MessageDerived2();
        messageDerived2.message = new MessageDerived1();
        message = messageDerived2;
        Bus.Send(message); // incorrect behaviour: transfers MessageDerived2 correctly,
                           // but loses the type of MessageDerived2.message (it deserializes
                           // as Message, instead of MessageDerived1)

    Any help is strongly appreciated. Thanks, TJ

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows: <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project --> <ItemGroup> <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files --> <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" /> <!-- Folder 2: 2 files --> <DeleteAfterBuild Include="$(OutputPath)banners\*.*" /> <!-- Folder 3: 1 file --> </ItemGroup> <Target Name="AfterBuild"> <Message Text="------ AfterBuild process starting ------" Importance="high" /> <Delete Files="@(DeleteAfterBuild)"> <Output TaskParameter="DeletedFiles" PropertyName="deleted" /> </Delete> <Message Text="DELETED FILES: $(deleted)" Importance="high" /> <Message Text="------ AfterBuild process complete ------" Importance="high" /> </Target> The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then adding Folder 2 and 3, it starts removing all the files as I want. Then, seeming at random times, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where it should be) but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?

  • SWIG & Java Use of carrays.i and array_functions for C Array of Strings

    - by c12
    I have the below configuration where I'm trying to create a test C function that returns a pointer to an Array of Strings and then wrap that using SWIG's carrays.i and array_functions so that I can access the Array elements in Java. Uncertainties: %array_functions(char, SWIGArrayUtility); - not sure if char is correct inline char *getCharArray() - not sure if C function signature is correct String result = getCharArray(); - String return seems odd, but that's what is generated by SWIG SWIG.i: %module Test %{ #include "test.h" %} %include <carrays.i> %array_functions(char, SWIGArrayUtility); %include "test.h" %pragma(java) modulecode=%{ public static char[] getCharArrayImpl() { final int num = numFoo(); char ret[] = new char[num]; String result = getCharArray(); for (int i = 0; i < num; ++i) { ret[i] = SWIGArrayUtility_getitem(result, i); } return ret; } %} Inline Header C Function: #ifndef TEST_H #define TEST_H inline static unsigned short numFoo() { return 3; } inline char *getCharArray(){ static char* foo[3]; foo[0]="ABC"; foo[1]="5CDE"; foo[2]="EEE6"; return foo; } #endif Java Main Tester: public class TestMain { public static void main(String[] args) { System.loadLibrary("TestJni"); char[] test = Test.getCharArrayImpl(); System.out.println("length=" + test.length); for(int i=0; i < test.length; i++){ System.out.println(test[i]); } } } Java Main Tester Output: length=3 ? ? , SWIG Generated Java APIs: public class Test { public static String new_SWIGArrayUtility(int nelements) { return TestJNI.new_SWIGArrayUtility(nelements); } public static void delete_SWIGArrayUtility(String ary) { TestJNI.delete_SWIGArrayUtility(ary); } public static char SWIGArrayUtility_getitem(String ary, int index) { return TestJNI.SWIGArrayUtility_getitem(ary, index); } public static void SWIGArrayUtility_setitem(String ary, int index, char value) { TestJNI.SWIGArrayUtility_setitem(ary, index, value); } public static int numFoo() { return TestJNI.numFoo(); } public static String getCharArray() { return TestJNI.getCharArray(); } public static char[] getCharArrayImpl() { final int num = numFoo(); char ret[] = new char[num]; String result = getCharArray(); System.out.println("result=" + result); for (int i = 0; i < num; ++i) { ret[i] = SWIGArrayUtility_getitem(result, i); System.out.println("ret[" + i + "]=" + ret[i]); } return ret; } }
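
    A possible cause to check (a sketch, not a verified fix): getCharArray() builds a char *foo[3] (an array of pointers) but is declared to return char *, so the wrapped getitem would index raw bytes at the address of foo rather than the strings themselves, which could explain the '?' characters printed on the Java side. If the goal is an array of C strings, the more usual shape is to return char ** and wrap it with %array_functions(char *, StringArray), whose getitem maps char * to java.lang.String. The header function could then look like this (illustrative only):

        // Assumed correction for illustration: return an array of strings as char **.
        inline char **getCharArray()
        {
            static char *foo[3] = { (char *)"ABC", (char *)"5CDE", (char *)"EEE6" };
            return foo;
        }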

  • Visual Studio soft-crashing when encountering XAML Errors in initialize.

    - by Aren
    I've been having some serious issues with Visual Studio 2010 as of late. It's been crashing in a peculiar way when I encounter certain types of XAML errors during the InitializeComponent() of a control/window. The program breaks, and Visual Studio gears up like it's catching an exception (because it is) and then stops midway, displaying a broken highlight in my XAML file with no details as to what is wrong. Example: there are no pop-outs or details anywhere about what is wrong, only a call stack that points to my InitializeComponent() call.

    Now, normally I'd just do some trial and error to fix this problem and find out where I messed up, but the real problem isn't my code. Visual Studio is rendered completely useless at this point. It reports my application as still in "Running" mode. The Stop/Break/Restart buttons on the toolbar or in the menus don't do anything (but grey out). Closing the application does not stop this behaviour; closing Visual Studio gets it stuck in a massive loop where it complains, for every open file, that the file is not in the debug project, then repeats this process when I have exhausted every open file. I have to force-close devenv.exe, and after this happening 3-4 times in a row it's a lot of wasted time (as my projects are usually pretty big and Studio can be quite slow at loading).

    To the point: has anyone else experienced this? How can I stop Studio from locking up? Can I at LEAST get information out of this beast another way, so I can fix my XAML error sooner rather than after 3-4 trial-and-error compiles yielding the same crash? Any and all help would be appreciated. Visual Studio 2010 version: 10.0.30319.1 RTM.

    Edit & update: FWIW, the errors that cause this are mostly XamlParseExceptions (I figured this out after I found what was wrong with my XAML). I think I need to be clearer, though: I'm not looking for the solution to my code problem, as these are usually typos / small things; I'm looking for a solution to Visual Studio getting all buggered up as a result. The particular error in the image above that caused this was, for certain, a XamlParseException caused by forgetting a Value attribute on a DataTrigger. I've fixed that part, but it still doesn't tell me why my Studio becomes a lump of neutered program when a perfectly normal exception is thrown in the parsing of the XAML.

    Code that will cause this issue (at least for me): this is the base template WPF application, with the following Window.xaml code. The problem is a missing Value="True" on the <DataTrigger ...> in the template. It generates a XamlParseException, and Visual Studio crashes as described above when debugging it.

    Final notes. The following solutions did not help me: restarting Visual Studio; rebooting; reinstalling Visual Studio.

  • Read email from incoming mail server(POP)

    - by nccsbim071
    Hi, I have used an open source code from codeproject to read email from incoming mail server(POP Server). The code can be found at following location: http://www.codeproject.com/KB/IP/Pop3MimeClient.aspx So far it works fine i can read emails. My objective of using this code was to retrieve emails from POP server and process them. My problem is: If i use gmails pop server "pop.gmail.com" and run the appplication. I get only those emails that i have not retrieved since the last time i run the application. but if i use my clients pop server everytime i run the application i get all the emails in the pop server. for example: If i use gmail pop server: pop.gmail.com I have three emails in the pop server. I haven't run the application. I am running the application for the first time. Application reads the email, this time i will get 3 all the three email. I run the application second time, my application will not read any emails this time because i have already read the 3 existing one. This is fine, this is what i want. Now i send email to pop.gmail.com. I run the application again for the third time, this time i will only get the email that has just arrived that is the fourth one. This is good behaviour, this is what i want. But if i use my clients pop server: No matter how many times i run the application, it reads all the emails in the mail box. This will create problem for me, because i am thinking of building a window service that will read emails from pop server and process them. This service will run continuously. I will process emails in the pop serve then sleep for let's say 1 minute and the process the emails again. If the application downloaded from codeproject reads all the emails all the time, my clients mailbox can have like thousands for email in this mail box, so this would not be feasible for me. Is there some settings that is to be made at my client's pop server that will allow my application to retrieve only those emails that i have not read since last time i run the service or any help Please help, thanks,
