Search Results


  • Template Sort In C++

    - by wdow88
    Hey all, I'm trying to write a sort function but am having trouble figuring out how to initialize a value, and making this function work as a generic template. The sort works by:

        Find a pair (ii, jj) with minimum value ii + jj such that A[ii] > A[jj].
        If such a pair exists, then swap A[ii] and A[jj]; else break.

    The function I have written is as follows:

        template <typename T>
        void sort(T *A, int size) {
            T min = 453;
            T temp = 0;
            bool swapper = true;
            while (swapper) {
                swapper = false;
                int index1 = 0, index2 = 0;
                for (int ii = 0; ii < size - 1; ii++) {
                    for (int jj = ii + 1; jj < size; jj++) {
                        if ((min >= (A[ii] + A[jj])) && (A[ii] > A[jj])) {
                            min = (A[ii] + A[jj]);
                            index1 = ii;
                            index2 = jj;
                            swapper = true;
                        }
                    }
                }
                if (!swapper)
                    return;
                else {
                    temp = A[index1];
                    A[index1] = A[index2];
                    A[index2] = temp;
                    sort(A, size);
                }
            }
        }

    This function will successfully sort an array of integers, but not an array of chars. I do not know how to properly initialize the min value for the start of the comparison. I tried initializing the value by simply adding the first two elements of the array together (min = A[0] + A[1]), but it looks to me like for this algorithm it will fail. I know this is sort of a strange type of sort, but it is practice for a test, so thanks for any input.
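    (One way to sidestep the magic constant, sketched below as an illustration rather than the assignment's expected answer: seed the minimum lazily from the first qualifying pair, and keep the running sum in the sum's own type, since char + char promotes to int and storing it back into a char would truncate it.)

        #include <utility>

        // Sketch: seed the minimum from the first qualifying pair instead of
        // a magic constant; `min` lives in the sum's type (int when T is char).
        template <typename T>
        void pair_sort(T *A, int size) {
            bool swapped = true;
            while (swapped) {
                swapped = false;
                int i1 = 0, i2 = 0;
                bool found = false;                // no candidate pair seen yet
                decltype(A[0] + A[0]) min{};       // only read once found == true
                for (int ii = 0; ii < size - 1; ii++) {
                    for (int jj = ii + 1; jj < size; jj++) {
                        if (A[ii] > A[jj] && (!found || A[ii] + A[jj] <= min)) {
                            min = A[ii] + A[jj];
                            i1 = ii;
                            i2 = jj;
                            found = swapped = true;
                        }
                    }
                }
                if (swapped)
                    std::swap(A[i1], A[i2]);       // repeat until no pair qualifies
            }
        }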


  • PHP import functions

    - by ninuhadida
    Hi, I'm trying to find the best pragmatic approach to import functions on the fly... let me explain. Say I have a directory called functions which has these files:

        array_select.func.php
        stat_mediam.func.php
        stat_mean.func.php
        .....

    I would like to load each individual file (which has a function defined inside) and use it just like an internal PHP function, such as array_pop(), array_shift(), etc.

    Once I stumbled on a tutorial (which I can't find again now) that compiled user-defined functions as part of a PHP installation. That's not a very good solution, though, because on shared/reseller hosting you can't recompile the PHP installation.

    I don't want to have conflicts with future versions of PHP or other extensions; i.e. if a function named X by me suddenly becomes part of the internal PHP functions (even though it might not have the same functionality per se), I don't want PHP to throw a fatal error because of this and fail miserably.

    So the best method I can think of is to check if a function is defined using function_exists(): if so, throw a notice so that it's easy to track in the log files; otherwise define the function. However, that will probably translate to having a lot of include/require statements in other files where I need such a function, which I don't really like. Or possibly, read the directory and loop over each *.func.php file and include_once each one, though I find this a bit ugly.

    The question is, have you ever stumbled upon some source code which handled such a case? How was it implemented? Did you ever do something similar? I need as many ideas as possible! :)


  • Parsing Strings ( .crt files )

    - by user1661521
    Base knowledge: I have a .crt file (certificate authority file). It is composed of many fields, but one line sums up this question:

        Certificate:
            ...(a lot of stuff before)...
            Subject: C=US, ST=Maryland, L=Pasadena, O=Brent Baccala, OU=FreeSoft, CN=www.freesoft.org/[email protected]
            Subject Public Key Info:
            ...(a lot of stuff after)

    I need to parse the file to populate a .csv file, and I have that done. The problem I need help with: I need to get the field CN=www.freesoft.org, but when the CN value contains a lot of slashes I get an error in the parsing. For example, if the raw string is:

        CN=foo/bar/the/hell/emailAddress=blablabla

    then I need only foo/bar/the/hell. For a moment I had that in the correct column, but when I don't have the emailAddress part, something fails in my parsing and I then get the wrong information in my CN .csv column: instead of

        |CN| foo/bar/the/hell

    I get:

        |CN| OU=FreeSoft, foo/bar/the/hell

    I have this code doing the CN parsing:

        #!/bin/bash
        subject_line=$(echo $cert | grep -o "Subject:.*Subject Public Key Info")
        cn=$(echo $subject_line | grep -o "CN=.*")
        if [ $(echo $cn | grep -c ".*email.*") -gt 0 ]; then
            end_cn=$(echo $cn | grep -b -o emailAddress)
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-4}
            echo $common_name
        else
            end_cn=$(echo $cn | grep -b -o "Subject Public Key Info")
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-5}
            echo $common_name
        fi
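    (As an aside, the whole index-arithmetic dance can collapse into one non-greedy match: grab everything after "CN=" up to an optional "/emailAddress=..." tail, so slashes inside the CN survive. The sketch below shows the rule in C++ purely for illustration; the same pattern works in any PCRE-style engine.)

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            // Lazy (.*?) stops as soon as the optional email tail (or end of
            // line) can match, so embedded slashes stay in the capture.
            std::string subj = "CN=foo/bar/the/hell/emailAddress=blablabla";
            std::smatch m;
            std::regex re("CN=(.*?)(?:/emailAddress=.*)?$");
            if (std::regex_search(subj, m, re))
                std::cout << m[1] << "\n";   // prints: foo/bar/the/hell
        }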


  • Side by side madness - running binaries on a different computer (with a twist)

    - by sbk
    Here's my configuration:

        Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
        Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

    My project has some external dependencies (prebuilt DLLs - either built on A or downloaded from the Internet), a couple of DLLs built from sources, and one executable. I am mostly developing on A and all is fine there.

    At some point I try to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)...". The event log contains two of these:

        Dependent Assembly Microsoft.VC80.CRT could not be found and Last Error was The referenced assembly is not installed on your system.

    plus the slightly more amusing:

        Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully.

    At this point I'm trying my Google-fu, but in vain - virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the computer they were built on.

    Next step was to try Dependency Walker, and it baffled me even more - my DLLs built from sources on the same box cannot find MSVCR80.DLL and MSVCP80.DLL; however, the executable seems to be all right in respect to those two DLLs, i.e. when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot.

    That's where I am completely out of ideas what to do, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.


  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.B/s
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran "git fsck", which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while local is 1.6.0.4.

    I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail...

    Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again; perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.


  • Creating an SQL variable character column > 255 characters supporting multiple databases

    - by Piers
    I have an application that stores data through an ODBC data source of the user's choosing. So far it has worked well on a range of database systems (e.g. JET, Oracle, SQL Server), as the SQL syntax is fairly simple. Now I am running into a problem where I need to store more than 255 characters in my strings.

    Previously I created the table using column type VARCHAR(255). Now if I try to create a table using, e.g., VARCHAR(512) then it falls over on Access databases. I know that I can use the MEMO type for Access, but this is non-standard SQL and will thus likely fail on other database systems (e.g. Oracle).

    Is there any widely supported SQL standard for creating text columns wider than 255 characters, or do I need to find another solution? The alternatives seem to me to be:

    1) Profile the database system and customise the SQL CREATE TABLE command based on the database system. I don't like this as it defeats the purpose of using ODBC.

    2) Add extra columns of 255 chars as required (e.g. LONGSTRING1, LONGSTRING2, ...) and concatenate after reading. I don't like this because it means the number of columns can vary between tables and it complicates read/write.

    Are there any other viable alternatives to these two options? Or is it possible to have an SQL-compliant CREATE TABLE command, supported by the majority of database vendors, that supports strings longer than 255 chars?
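    (If option 1 ends up being the only road, the profiling can at least stay inside ODBC itself. A hedged C++ sketch, assuming an ANSI build: the driver is asked for its DBMS name via the standard SQLGetInfo/SQL_DBMS_NAME call; the type map below is an illustrative guess, not a vetted list.)

        #include <sql.h>
        #include <sqlext.h>
        #include <string>

        // Sketch: ask the ODBC driver which backend it is and pick a wide-text
        // column type accordingly.
        std::string WideTextColumnType(SQLHDBC hdbc) {
            char dbms[128] = "";
            SQLSMALLINT len = 0;
            SQLGetInfo(hdbc, SQL_DBMS_NAME, (SQLPOINTER)dbms, sizeof(dbms), &len);
            std::string name(dbms);
            if (name.find("ACCESS") != std::string::npos) return "MEMO";
            if (name.find("Oracle") != std::string::npos) return "CLOB";
            return "VARCHAR(4000)";   // fallback guess for other backends
        }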


  • Winsock tcp/ip Socket listening but connection refused, race condition?

    - by Wayne
    Hello folks. This involves two automated unit tests which each start up a TCP/IP server that creates a non-blocking socket, then bind()s and listen()s in a loop on select() for a client that connects and downloads some data. The catch is that they work perfectly when run separately, but when run as a test suite, the second test client will fail to connect with WSAECONNREFUSED... UNLESS there is a Thread.Sleep() of several seconds between them?!

    Interestingly, there is a retry loop every 1 second for connecting after any failure, so the second test loops for a while until timeout after 10 minutes. During that time, netstat -na shows the correct port number is in the LISTEN state for the server socket. So if it is in the LISTEN state, why won't it accept the connection? In the code, there are log messages that show the select NEVER even gets a socket ready to read (which means ready to accept a connection when it applies to a listening socket).

    Obviously the problem must be related to some race condition between finishing one test - which means close() and shutdown() on each end of the socket - and the start-up of the next. This wouldn't be so bad if the retry logic allowed it to connect eventually after a couple of seconds. However, it seems to get "gummed up" and won't even retry. And for some strange reason the listening socket SAYS it's in the LISTEN state even though it keeps refusing connections. That means it's the Windoze O/S which is actually catching the SYN packet and returning a RST packet (which means "Connection Refused").

    The only other time I ever saw this error was when the code had a problem that caused hundreds of sockets to get stuck in the TIME_WAIT state. But that's not the case here: netstat shows only about a dozen sockets, with only 1 or 2 in TIME_WAIT at any given moment. Please help.
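    (One hedged guess at the race worth testing: on Windows, SO_REUSEADDR allows a second socket to bind a port that the previous test's listener still holds, and incoming SYNs can then be refused on behalf of the half-dead socket even while netstat reports LISTEN. Binding with SO_EXCLUSIVEADDRUSE instead makes the overlap fail loudly at bind() time, so the second test can wait for the port to actually free up. A minimal sketch:)

        #include <winsock2.h>

        // Sketch: fail fast if another socket still owns the port, instead of
        // silently sharing it via SO_REUSEADDR.
        bool BindExclusive(SOCKET listener, const sockaddr_in& addr) {
            BOOL exclusive = TRUE;
            setsockopt(listener, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
                       (const char*)&exclusive, sizeof(exclusive));
            return bind(listener, (const sockaddr*)&addr, sizeof(addr)) == 0;
        }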


  • How to generate a monochrome bit mask for a 32bit bitmap

    - by Mordachai
    Under Win32, it is a common technique to generate a monochrome bitmask from a bitmap for transparency use by doing the following:

        SetBkColor(hdcSource, clrTransparency);
        VERIFY(BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcSource, 0, 0, SRCCOPY));

    This assumes that hdcSource is a memory DC holding the source image, and hdcMask is a memory DC holding a monochrome bitmap of the same size (so both are 32x32, but the source is 4-bit color, while the target is 1-bit monochrome).

    However, this seems to fail for me when the source is 32-bit color + alpha. Instead of getting a monochrome bitmap in hdcMask, I get a mask that is all black: no bits get set to white (1). Whereas this works for the 4-bit color source. My search-foo is failing, as I cannot seem to find any references to this particular problem. I have isolated that this is indeed the issue in my code: i.e. if I use a source bitmap that is 16-color (4-bit), it works; if I use a 32-bit image, it produces the all-black mask.

    Is there an alternate method I should be using in the case of 32-bit color images? Is there an issue with the alpha channel that overrides the normal behavior of the above technique? Thanks for any help you may have to offer!

    ADDENDUM: I am still unable to find a technique that creates a valid monochrome bitmap for my GDI+-produced source bitmap. I have somewhat alleviated my particular issue by simply not generating a monochrome bitmask at all; instead I'm using TransparentBlt(), which seems to get it right (but I don't know what they're doing internally that's any different that allows them to correctly mask the image). It might be useful to have a really good, working function:

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparency);

    where it always creates a valid transparency mask, regardless of the color depth of hSource. Ideas?
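    (One workaround worth trying, sketched below as an illustration rather than a guaranteed fix: blit the 32bpp image into a 24bpp DIB section first, so the alpha channel can no longer perturb the color-key comparison, then run the usual SetBkColor + monochrome BitBlt against the 24bpp copy.)

        #include <windows.h>

        // Sketch: make an alpha-free 24bpp copy of the 32bpp source, then build
        // the monochrome mask from the copy instead of the original.
        HBITMAP Make24bppCopy(HDC hdcRef, HDC hdcSource, int w, int h) {
            BITMAPINFO bmi = {};
            bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth       = w;
            bmi.bmiHeader.biHeight      = h;
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 24;      // no alpha channel
            bmi.bmiHeader.biCompression = BI_RGB;
            void* bits = NULL;
            HBITMAP hbm24 = CreateDIBSection(hdcRef, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
            HDC hdc24 = CreateCompatibleDC(hdcRef);
            HGDIOBJ old = SelectObject(hdc24, hbm24);
            BitBlt(hdc24, 0, 0, w, h, hdcSource, 0, 0, SRCCOPY);  // RGB only; alpha dropped
            SelectObject(hdc24, old);
            DeleteDC(hdc24);
            return hbm24;  // select into a memory DC and run the mask blt from it
        }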


  • What do I name this class whose sole purpose is to report failure?

    - by Blair Holloway
    In our system, we have a number of classes whose construction must happen asynchronously. We wrap the construction process in another class that derives from an IConstructor class:

        class IConstructor {
        public:
            virtual void Update() = 0;
            virtual Status GetStatus() = 0;
            virtual int GetLastError() = 0;
        };

    There's an issue with the design of the current system - the functions that create the IConstructor-derived classes are often doing additional work which can also fail. At that point, instead of getting a constructor which can be queried for an error, a NULL pointer is returned. Restructuring the code to avoid this is possible, but time-consuming. In the meantime, I decided to create a constructor class which we create and return in case of error, instead of a NULL pointer:

        class FailedConstructor : public IConstructor {
        public:
            virtual void Update() {}
            virtual Status GetStatus() { return STATUS_ERROR; }
            virtual int GetLastError() { return m_errorCode; }
        private:
            int m_errorCode;
        };

    All of the above is the setup for a mundane question: what do I name the FailedConstructor class? In our current system, FailedConstructor would indicate "a class which constructs an instance of Failed", not "a class which represents a failed attempt to construct another class". I feel like it should be named for one of the design patterns, like Proxy or Adapter, but I'm not sure which.
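    (For what it's worth, this is essentially the Null Object / Special Case pattern rather than Proxy or Adapter: an object standing in for a missing real one so callers need no NULL checks. A name describing the construction attempt avoids the "constructs a Failed" reading; a hedged sketch reusing the question's own types:)

        // Illustrative rename only; Status and STATUS_ERROR come from the
        // snippet above.
        class ConstructionFailure : public IConstructor {
        public:
            explicit ConstructionFailure(int errorCode) : m_errorCode(errorCode) {}
            virtual void Update() {}
            virtual Status GetStatus() { return STATUS_ERROR; }
            virtual int GetLastError() { return m_errorCode; }
        private:
            int m_errorCode;
        };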


  • How to parse large xml files on google app engine?

    - by Alon Carmel
    Hey, I have a fairly large XML file, 1MB in size, that I host on S3. I need to parse that XML file into my App Engine datastore entirely. I have written a simple DOM parser that works fine locally, but online it reaches the 30-sec error and stops.

    I tried lowering the XML parsing load by downloading the XML file into a blob first, before the parser, then parsing the XML file from the blob. The problem is that blobs are limited to 1MB, so it fails. I also have multiple inserts to the datastore, which cause it to fail on 30 sec.

    I saw somewhere that they recommend using the Mapper class and saving some state about where the process stopped, but as I am a Python n00b I can't figure out how to implement it on a DOM parser or a SAX one (please provide an example?), or how to use it.

    I'm pretty much doing a bad thing right now: I parse the XML using PHP outside App Engine and push the data via HTTP POST to App Engine using a proprietary API, which works fine but is stupid and makes me maintain two codebases. Can you please help me out?


  • Cappuccino plist structure

    - by PurplePilot
    The question is: does anyone know what the structure of the (type-2) plist files in Cappuccino is?

    In Cappuccino there is a lot of use made of plist files. Some, such as Info.plist (type-1), follow a recognizable structure. These are fine; I can understand them:

        <plist version="1.0">
        <dict>
            <key>CPApplicationDelegateClass</key>
            <string>DocumentController</string>
            <key>CPBundleDocumentTypes</key>
            <array>
                <dict>
                ..... etc

    However, others (type-2), which are used for importing data, importing the pptx files to and from the Slides application and, I believe, in Atlas the development tool, do not. They have a structure like this:

        280NPLIST;1.0;D;K;4;$topD;K;23;DocumentPresentationKeyD;K;6;CP$UIDd;1;1E;E;K;8;$objectsA;S;5;$nullD;K;6;$classD;K;6;CP$UIDd;1;2E;K;23;SKPresentationSlideSizeD;K;6;CP$UIDd;1;3E;K;23;SKPresentationNotesSizeD;K;6;CP$UIDd;1;4E;K;20;SKPresentationSlidesD;K;6;CP$UIDd;1;5E;K;26;SKPresentationSlideMastersD;K;6;CP$UIDd;1;7E;K;19;SKPresentationThemeD;K;6;CP$UIDd;1;8E;E;D;K;10;$classnameS;14;

    This appears to come on a single line regardless of size (I had one today with in excess of 1.3 million chars). Some of the structure has to do with character counting, but I have had what look like valid files that fail, and ones that look dubious that do not.

    I suspect I have just asked a Tumbleweed badge question here, but as I already have one it doesn't matter.


  • Can PHP dissect its own syntax?

    - by Nathan Long
    Can PHP dissect its own syntax? For example, I'd like to write a function that takes in an input like $object->attribute and says to itself: "OK, he's giving me $foo->bar, which means he must think that $foo is an object that has a property called bar. Before I try accessing bar and potentially get a 'Trying to get property of non-object' error, let me check whether $foo is even an object."

    The end goal is to echo a value if it is set, and fail silently if not. I want to avoid repetition like this:

        <input value="<? if(is_object($foo) && isset($foo->bar)){ echo $foo->bar; }?>"/>

    ...and to avoid writing a function that does the above but has to have the object and attribute passed in separately, like this:

        <input value="<? echoAttribute($foo,'bar') ?>" />

    ...but to instead write something which preserves the object-attribute syntax and is flexible: it can also handle array keys or regular variables. Like this:

        <input value="<? echoIfSet($foo->bar); ?>" />
        <input value="<? echoIfSet($baz['buzz']); ?>" />
        <input value="<? echoIfSet($moo); ?>" />

    But this all depends on PHP being able to tell me "what kind of thing am I asking for when I say $object->attribute or $array[$key]", so that my function can handle each according to its own type. Is this possible?


  • MSBuild script fails but produces no errors

    - by Kate
    I have an MSBuild script that I am executing through TeamCity. One of the tasks that it runs is from Xheo DeployLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is OK.

    This task used to take around 40 minutes, and the rest of the MSBuild file executed perfectly and finished without errors. For some reason this task is now taking 1 hr 20 minutes or so to execute. Once the VeilProject task is finished, the output from the task says it completed successfully; however, the MSBuild script fails at this point. I have a task directly after the VeilProject task and it does not get outputted. Using diagnostic output from MSBuild I can see the log below.

    My questions are: Would it be possible that the MSBuild script has timed out? Once the task has completed it is after a certain timeout period, so it just fails? Why would the build fail with no errors and no warnings?

        [05:39:06]: [Target "Obfuscate"] Finished.
        [05:39:06]: [Target "Obfuscate"] Saving exception map
        [05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
        [05:49:22]: [Target "Obfuscate"] Done.
        [05:49:51]: MSBuild output:
        Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
        Done. (TaskId:8)
        Done executing task "VeilProject" -- FAILED. (TaskId:8)
        Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
        Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.

        Project Performance Summary:
            6535484 ms  C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx   1 calls
            6535484 ms  All                                                         1 calls

        Target Performance Summary:
                156 ms  PreClean                 1 calls
                266 ms  SetBuildVersionNumber    1 calls
               2406 ms  CopyFiles                1 calls
            6532391 ms  Obfuscate                1 calls

        Task Performance Summary:
                 16 ms  MakeDir                  2 calls
                 31 ms  TeamCitySetBuildNumber   1 calls
                 31 ms  Message                  1 calls
                 62 ms  RemoveDir                2 calls
                234 ms  GetAssemblyIdentity      1 calls
               2406 ms  Copy                     1 calls
            6528047 ms  VeilProject              1 calls

        Build FAILED.
        0 Warning(s)
        0 Error(s)
        Time Elapsed 01:48:57.46
        [05:49:52]: Process exit code: 1
        [05:49:55]: Build finished


  • gethostbyname fails for local hostname after resuming from hibernate (Vista+7?)

    - by John
    Just wondering if anyone else has spotted this: on some users' machines running our software, occasionally the call to the Win32 Winsock gethostbyname fails with error code 11004. For the argument to gethostbyname, I'm passing in the result from gethostname.

    Now the docs say 11004 is WSANO_DATA. None of the descriptions seem to be relevant (it occurs if you pass in an IPv6 address, but as I say, I'm passing in a hostname). Even more interesting is that MSDN suggests that this combination (gethostname followed by gethostbyname) should never fail, not even if there is no IP address (in that case it would just return an empty list of IPs). Here is the quote from the gethostname MSDN entry:

        ...it is guaranteed that the name returned will be successfully parsed by gethostbyname and WSAAsyncGetHostByName.

    It only ever happens after resuming from hibernate, in that short period when the network is restarting, and only on Vista/7 (well, I've only seen it on Vista and 7). One theory I had was that it is related to IPv6: maybe for a short period the network reports an IPv6 address but not the corresponding IPv4 address (I'm pretty sure that all the client machines are dual IP stack, but I could be wrong). I tried to reproduce by turning off my network card (to force no IP addresses) and couldn't reproduce.

    Anyone seen this before? Any ideas? John
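    (Since the failure window is the short network-restart period after resume, one pragmatic workaround is a small retry loop that only retries the transient WSANO_DATA case. A hedged Winsock sketch, assuming WSAStartup has already run:)

        #include <winsock2.h>

        // Sketch: resolve the local hostname, retrying only the transient
        // WSANO_DATA (11004) failure for a short window.
        hostent* ResolveSelfWithRetry(int attempts, DWORD delayMs) {
            char name[256] = "";
            if (gethostname(name, sizeof(name)) != 0)
                return NULL;
            for (int i = 0; i < attempts; ++i) {
                if (hostent* he = gethostbyname(name))
                    return he;
                if (WSAGetLastError() != WSANO_DATA)
                    break;                  // a different error: don't spin
                Sleep(delayMs);
            }
            return NULL;
        }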


  • Has anyone used the client_side_validations gem with a Chosen.js dropdown?

    - by Abid
    I am using chosen.js (http://harvesthq.github.com/chosen/). I was wondering if anyone has been able to use Chosen select boxes and client_side_validations together. The issue is that when we use Chosen it hides the original select element and renders its own dropdown instead; when we focus out, the validation isn't called, and when the validation message is shown it is shown with the original select element, so the positioning of the error isn't correct either.

    What could be a good way to handle this? Maybe we can change some code inside ActionView::Base.field_error_proc, which currently looks something like:

        ActionView::Base.field_error_proc = Proc.new do |html_tag, instance|
          unless html_tag =~ /^<label/
            %{<div class="field_with_errors">#{html_tag}<label for="#{instance.send(:tag_id)}" class="message">#{instance.error_message.first}</label></div>}.html_safe
          else
            %{<div class="field_with_errors">#{html_tag}</div>}.html_safe
          end
        end

    Any ideas?

    Edit 1: I have the following solution that is working for me now:

    1) Applied a class "chzn-dropdown" to all my selects that were being displayed by Chosen.

    2) Used the following callback provided by the client_side_validations gem:

        clientSideValidations.callbacks.element.fail = function(element, message, callback) {
          if (element.data('valid') !== false) {
            if (element.hasClass('dropdown')) {
              chzn_element = $('#' + element.attr('id') + '_chzn');
              console.log(chzn_element);
              chzn_element.append("" + message + "");
            } else {
              callback();
            }
          }
        }

    Thanks


  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: in terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably believe that I won't run into any issues with a program reading these incorrectly?

    I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard sets of ;, |, and similar ilk have been exhausted.

    I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing with one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using the set of characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything.

    But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I became recently concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something is expected to be written one way, and is possibly written another, then maybe the whole system will fail, and I can't really let that happen.

    So is it safe to use these kinds of chars for background delimiters?
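    (A low-risk alternative worth noting: ASCII reserves 0x1C-0x1F, the file/group/record/unit separator control codes, for exactly this kind of layered delimiting. They are single chars, never occur in normal text, and survive every Unicode encoding unchanged because they are plain ASCII. A small sketch of splitting on the unit separator, shown in C++ only for illustration since the idea is language-neutral:)

        #include <sstream>
        #include <string>
        #include <vector>

        // Sketch: split a record on the ASCII unit separator (0x1F). Higher
        // tiers can use 0x1E (record), 0x1D (group) and 0x1C (file).
        std::vector<std::string> split_units(const std::string& s) {
            std::vector<std::string> parts;
            std::string part;
            std::istringstream in(s);
            while (std::getline(in, part, '\x1F'))
                parts.push_back(part);
            return parts;
        }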


  • SWT - Table Row - Changing font color

    - by jkteater
    Is it possible to change the font color for a row based on a value in one of the columns? My table has a column that displays a status. The value of the column is going to be either Failed or Success. If it is Success, I would like that row's font to be green. If the status equals Failed, I want that row's font to be red. Is this possible, and if so, where would I put the logic?

    EDIT: Here is my TableViewer code. I am not going to show all the columns, just a couple:

        private void createColumns() {
            String[] titles = { "ItemId", "RevId", "PRL", "Dataset Name", "Printer/Profile", "Success/Fail" };
            int[] bounds = { 100, 75, 75, 150, 200, 100 };

            TableViewerColumn col = createTableViewerColumn(titles[0], bounds[0], 0);
            col.setLabelProvider(new ColumnLabelProvider() {
                public String getText(Object element) {
                    if (element instanceof AplotResultsDataModel.ResultsData) {
                        return ((AplotResultsDataModel.ResultsData) element).getItemId();
                    }
                    return super.getText(element);
                }
            });

            col = createTableViewerColumn(titles[1], bounds[1], 1);
            col.setLabelProvider(new ColumnLabelProvider() {
                public String getText(Object element) {
                    if (element instanceof AplotResultsDataModel.ResultsData) {
                        return ((AplotResultsDataModel.ResultsData) element).getRevId();
                    }
                    return super.getText(element);
                }
            });

            // --ETC
        }


  • Using ember-resource with CouchDB - how can I save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and CouchDB. I chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since CouchDB uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to CouchDB. My idea for implementing this is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this:

        // Since we use CouchDB, we have to make sure that we invalidate and re-fetch
        // every document right after saving it. CouchDB uses an optimistic locking
        // scheme based on the attribute "_rev" in the documents, so we reload it in
        // order to have the correct _rev value.
        didSave: function() {
          this._super.apply(this, arguments);
          this.forceReload();
        },

        // reload resource after save is done, expire to make reload really do something
        forceReload: function() {
          this.expire();  // Everything OK up to this location
          Ember.run.next(this, function() {
            this.fetch()  // Sub-document is reset here, and *not* refetched!
              .fail(function(error) {
                App.displayError(error);
              })
              .done(function() {
                App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev'));
              });
          });
        }

    This works for most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the memory model after the fetch, although the _rev attribute is correct (as well as all attributes of the main object). Here is part of my object definition:

        App.TaskDefinition = App.Resource.define({
          url: App.dbPrefix + 'courseware',
          schema: {
            id: String,
            _rev: String,
            type: String,
            name: String,
            comment: String,
            task: { type: 'App.Task', nested: true }
          }
        });

        App.Task = App.Resource.define({
          schema: {
            id: String,
            title: String,
            description: String,
            startImmediate: Boolean,
            holdOnComment: Boolean,
            .....  // other attributes and sub-objects
          }
        });

    Any ideas where the problem might be? Thanks a lot for any suggestion! Kind regards, Thomas


  • Entity Framework. Updating EntityCollection using disconnected objects via navigation property.

    - by yougotiger
    I have a question, much like this unanswered one. I'm trying to work with the Entity Framework, and having a tough time getting my foreign tables to update. I have something basically like this in the DB:

        Incident (table):
        - ID
        - other fields

        Responses (table):
        - FK: Incident.ID
        - other fields

    And entities that match:

        Incident (entity)
        - ID
        - Other fields
        - Responses (EntityCollection of Responses via navigation property)

    Each Incident can have 0 or more Responses. In my web page, I have a form to allow the user to enter all the details of an Incident, including a list of responses. I can add everything to the database when a new Incident is created; however, I'm having difficulty with editing the Incident.

    When the page loads for edit, I populate the form and then store the responses in the viewstate. When the user changes the list of responses (adds one, deletes one or edits one), I store this back into the viewstate. Then when the user clicks the save button, I'd like to save the changes to the Incident and the Responses back to the DB. I cannot figure out how to get the responses from the detached viewstate into the Incident object so that they can be updated together.

    Currently when the user clicks save, I'm getting the Incident to edit from the DB, making changes to the Incident's fields and then saving it back to the DB. However, I can't figure out how to attach the detached list of responses to the Incident. I have tried the following without success:

    1) Clearing the Incident.Responses collection and adding the ones from the viewstate back in:

        Incident.Responses.Clear()
        For Each objResponse In Viewstate("Responses")
            Incident.Responses.Add(objResponse)
        Next

    2) Creating an EntityCollection from my list and then assigning that to Incident.Responses:

        Incident.Responses = EntityCollectionFromViewstateList

    3) Iterating through the responses in Incident.Responses and assigning the corresponding object from viewstate:

        For Each objResponse In Incident.Responses
            objResponse = objCorrespondingModifedResonseFromViewState
        Next

    These all fail. I'd like to be able to merge the changes into the Incident object so that when the BLL calls SaveChanges, the changes to both the Incident and the Responses will happen at the same time. Any suggestions? I keep finding lots of stuff about assigning foreign keys (singular), but I haven't found a great solution for assigning a set of entities to another entity in this manner.


  • Getting an element's children with a certain tag in jQuery

    - by johnnyArt
    I'm trying to get all the input elements from a certain form with jQuery, providing only the id of the form and knowing only that the wanted fields are input elements. Let's say:

        <form action='#' id='formId'>
            <input id='name' />
            <input id='surname'/>
        </form>

    How do I access them individually with jQuery? I tried something like $('#formId > input') with no success; in fact, an error came back on the console:

        XML filter is applied to non-XML value (function (a, b) {return new (c.fn.init)(a, b);})

    Maybe I have to do it with .children or something like that? I'm pretty new at jQuery and I'm not really liking the docs. It was much friendlier in MooTools, or maybe I just need to get used to it.

    Oh, and last but not least - I've seen it asked before but with no final answer - can I create a new DOM element with jQuery and work with it before inserting it (if I ever do) into the code? In MooTools, we had something like:

        var myEl = new Element(element[, properties]);

    and you could then refer to it in further expressions, but I fail to understand how to do that in jQuery. Thanks in advance.


  • Protected and Private methods

    - by cabaret
    I'm reading through Beginning Ruby and I'm stuck at the part about private and protected methods. This is a newbie question, I know. I searched through SO for a bit, but I couldn't manage to find a clear and newbie-friendly explanation of the difference between private and protected methods. The book gives two examples, the first one for private methods:

        class Person
          def initialize(name)
            set_name(name)
          end

          def name
            @first_name + ' ' + @last_name
          end

          private

          def set_name(name)
            first_name, last_name = name.split(/\s+/)
            set_first_name(first_name)
            set_last_name(last_name)
          end

          def set_first_name(name)
            @first_name = name
          end

          def set_last_name(name)
            @last_name = name
          end
        end

    In this case, if I try

        p = Person.new("Fred Bloggs")
        p.set_last_name("Smith")

    it will tell me that I can't use the set_last_name method, because it's private. All good till there. However, in the other example, they define an age method as protected, and when I do

        fred = Person.new(34)
        chris = Person.new(25)
        puts chris.age_difference_with(fred)
        puts chris.age

    it gives an error:

        :20: protected method 'age' called for #<Person:0x1e5f28 @age=25> (NoMethodError)

    I honestly fail to see the difference between private and protected methods; they sound the same to me. Could someone provide me with a clear explanation so I'll never get confused about this again? I'll provide the code for the second example if necessary.


  • How can I diagnose "Cannot determine peer address" in my Perl TCP script?

    - by MadBoy
    I've got this little script which does its job pretty well, but sometimes it tends to fail. It fails in two cases:

    1) With the error "send: Cannot determine peer address at ./tcp-new.pl line 52".

    2) With no output or anything; it just fails to deliver what it got to the connected TCP client. Usually this happens after I disconnect from the server, go home and connect to it again. To fix it, a restart is required, and then it starts working. Sometimes this problem is followed by the problem mentioned in point 1.

    Note: it's not a problem when I disconnect and reconnect to it again within a short amount of time (unless error nr 1 happens). So can anyone help me make this code a bit more stable so I don't have to restart it every day?

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IO::Socket;
        use IO::Select;

        my $tcp_port = "10008";
        my $udp_port = "2099";

        my $tcp_socket = IO::Socket::INET->new(
            Listen    => SOMAXCONN,
            LocalPort => $tcp_port,
            Proto     => 'tcp',
            ReuseAddr => 1,
        );

        my $udp_socket = IO::Socket::INET->new(
            LocalPort => $udp_port,
            Proto     => 'udp',
        );

        my $read_select  = IO::Select->new();
        my $write_select = IO::Select->new();

        $read_select->add($tcp_socket);
        $read_select->add($udp_socket);

        while (1) {
            my @read = $read_select->can_read();
            foreach my $read (@read) {
                if ($read == $tcp_socket) {
                    my $new_tcp = $read->accept();
                    $write_select->add($new_tcp);
                }
                elsif ($read == $udp_socket) {
                    my $recv_buffer;
                    $udp_socket->recv($recv_buffer, 1024, undef);
                    my @write = $write_select->can_write();
                    foreach my $write (@write) {
                        $write->send($recv_buffer);
                    }
                }
            }
        }


  • problem writing xml to file with .net mvc - timeout?

    - by Mark
    Hey, so I'm having an issue with writing out to an XML file. It works fine for single requests via the browser, but when I use something like Charles to perform 5-10 repeated requests concurrently, several of them will fail. The trace simply shows a 500 error with no content inside. Basically, I think they start timing out waiting for write access or something... This method is inside my repository class; I have also tried making the repository instance a singleton, but it doesn't appear to make any difference. Any help would be much appreciated. Cheers

        public void Add(Request request)
        {
            try
            {
                XDocument requests;
                XmlReader xmlReader;
                using (xmlReader = XmlReader.Create(_requestsFilePath))
                {
                    requests = XDocument.Load(xmlReader);

                    XElement xmlRequest = new XElement("request",
                        new XElement("code", request.code),
                        new XElement("date", request.date),
                        new XElement("email", new XCData(request.email)),
                        new XElement("name", new XCData(request.name)),
                        new XElement("recieveOffers", request.recieveOffers)
                    );

                    requests.Root.Element("requests").Add(xmlRequest);
                    xmlReader.Close();
                }
                requests.Save(_requestsFilePath);
            }
            catch (Exception ex)
            {
                HttpContext.Current.Trace.Warn("Error writing to file: " + ex);
            }
        }


  • How do I abort a socket.recv() from another thread in python?

    - by Samuel Skånberg
    I have a main thread that waits for a connection. It spawns client threads that will echo the response from the client (telnet in this case). But say that I want to close down all sockets and all threads after some time, like after one connection. How would I do that? If I do clientSocket.close() from the main thread, it won't stop doing the recv. It will only stop if I first send something through telnet; then it will fail doing further sends and recvs. My code looks like this:

        # Echo server program
        import socket
        from threading import Thread
        import time

        class ClientThread(Thread):
            def __init__(self, clientSocket):
                Thread.__init__(self)
                self.clientSocket = clientSocket

            def run(self):
                while 1:
                    try:
                        # It will hang here, even if I do close on the socket
                        data = self.clientSocket.recv(1024)
                        print "Got data: ", data
                        self.clientSocket.send(data)
                    except:
                        break
                self.clientSocket.close()

        HOST = ''
        PORT = 6000

        serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        serverSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        serverSocket.bind((HOST, PORT))
        serverSocket.listen(1)

        clientSocket, addr = serverSocket.accept()
        print 'Got a new connection from: ', addr
        clientThread = ClientThread(clientSocket)
        clientThread.start()
        time.sleep(1)

        # This won't make the recv in the clientThread stop immediately,
        # nor will it generate an exception
        clientSocket.close()
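    (The usual way to unblock a thread stuck in recv is shutdown rather than close; in Python that is clientSocket.shutdown(socket.SHUT_RDWR) before clientSocket.close(). A minimal sketch of the same mechanism at the C sockets layer, since the behavior comes from the underlying API:)

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <unistd.h>

        // Assumed setup: `fd` is a connected TCP socket shared by two threads.
        void echo_loop(int fd) {
            char buf[1024];
            for (;;) {
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) break;            // returns 0 once shutdown() runs
                send(fd, buf, (size_t)n, 0);
            }
            close(fd);
        }

        // Call from the main thread instead of (or before) close():
        void stop_client(int fd) {
            shutdown(fd, SHUT_RDWR);          // unblocks the recv() above
        }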


  • Problem when copying array of different types using Arrays.copyOf

    - by Shervin
    I am trying to create a method that pretty much takes anything as a parameter and returns a concatenated string representation of the values with some delimiter:

        public static String getConcatenated(char delim, Object... names) {
            String[] stringArray = Arrays.copyOf(names, names.length, String[].class); // Exception here
            return getConcatenated(delim, stringArray);
        }

    And the actual method:

        public static String getConcatenated(char delim, String... names) {
            if (names == null || names.length == 0)
                return "";
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < names.length; i++) {
                String n = names[i];
                if (n != null) {
                    sb.append(n.trim());
                    sb.append(delim);
                }
            }
            // Remove the last delim
            return sb.substring(0, sb.length() - 1).toString();
        }

    And I have the following JUnit test:

        final String two = RedpillLinproUtils.getConcatenated(' ', "Shervin", "Asgari");
        Assert.assertEquals("Result", "Shervin Asgari", two); // OK

        final String three = RedpillLinproUtils.getConcatenated(';', "Shervin", "Asgari");
        Assert.assertEquals("Result", "Shervin;Asgari", three); // OK

        final String four = RedpillLinproUtils.getConcatenated(';', "Shervin", null, "Asgari", null);
        Assert.assertEquals("Result", "Shervin;Asgari", four); // OK

        final String five = RedpillLinproUtils.getConcatenated('/', 1, 2, null, 3, 4);
        Assert.assertEquals("Result", "1/2/3/4", five); // FAIL

    However, the test fails on the last part with the exception:

        java.lang.ArrayStoreException
            at java.lang.System.arraycopy(Native Method)
            at java.util.Arrays.copyOf(Arrays.java:2763)

    Can someone spot the error?

