Search Results

Search found 25660 results on 1027 pages for 'booting issue'.

  • NSUndoManager with Core Data - Redo not working

    - by CJ
    I have a Core Data document-based app which supports undo/redo via the built-in NSUndoManager associated with the NSManagedObjectContext. I have a few actions set up which perform numerous tasks within Core Data, wrap all these tasks into an undo group via beginUndoGrouping/endUndoGrouping, and are processed by the NSUndoManager. Undo works fine: I can perform several successive actions, then undo each of them in turn, and my app's state is maintained correctly. However, the "Redo" menu item is never enabled, which means the NSUndoManager is telling the menu that there are no items to redo. Why is the NSUndoManager seemingly forgetting about items once they are undone, and not allowing redos to occur? One thing I should mention is that I'm disabling undo registration after a document is opened/created. When I perform an action, I call enableUndoRegistration, beginUndoGrouping, perform the action, then call processPendingChanges, setActionName:, endUndoGrouping, and finally disableUndoRegistration. This makes sure that only specific actions are undoable, and any other data changes I make outside of these go unnoticed by the NSUndoManager. This may be part of the issue, but if so, why does it affect redo? Thanks in advance.

  • How to make a COM ActiveX object work in 64-bit IE?

    - by Kurtevich
    Hi! I have a COM object embedded in an ASP.NET page using <object classid="clsid:XXX...">. It works in 32-bit IE, but does not work in 64-bit IE - I can't access its functions. There are no error messages and no event logs where I can get any information. The DLL is written in C#, includes a COM-visible class, is compiled for Any CPU (though I also tried x86), and is registered during client installation by running regasm. This creates the registry keys, and everything works fine except in 64-bit IE. I searched the internet for the issue, or at least some guidelines, and didn't find anything. I received an answer on another forum mentioning _MERGE_PROXYSTUB (a preprocessor definition, I guess?) and the ProxyStubClsid32 registry key, but it wasn't very detailed. I searched again, didn't find much, and experimented: I rebuilt with _MERGE_PROXYSTUB defined and created ProxyStubClsid32 keys everywhere, but with no result. What are at least some possible solutions or points to look at? Is there at least a way to get logs about why 64-bit IE can't access it?

  • python __import__() imports from 2 different directories when same module exists in 2 locations

    - by programer_gramer
    Hi, I have a Python application with a directory structure like this:

        pythonapp
        - mainpython.py
        - module1
          - submodule1
            - file1.py
            - file2.py
          - submodule2
            - file3.py
            - file3.py
          - submodule3
            - file1.py
            - file2.py
            - file5.py
            - file6.py
            - file7.py

    When I try to import the Python utilities under submodule3 (from mainpython.py), I get the first two files from submodule1 (please note that submodule1 and submodule3 have two files with the same names). However, the same import works fine when there is no conflict, i.e. it correctly imports file5, file6 and file7 from submodule3. Here is the code:

        import os

        # file1.py - the name here is passed dynamically
        name = os.path.splitext(os.path.split("module1\\submodule3\\file1.py")[1])[0]
        module = __import__(name)

    When name is just "file1" this works (but with the wrong-file issue described above), but if I pass the complete package path "module1.submodule1.file1" it fails with an ImportError saying "no module named file1". Now the question is: how do I tell the interpreter to use only the one under module1.submodule3? I am using Python. This is urgent and I have run out of things to try - I hope some experienced Python developers can help.
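
    A hedged sketch of one way out, assuming the directories are real packages (i.e. each contains an __init__.py, which the question doesn't state): plain __import__("a.b.c") returns the top-level package a, not the leaf module, which is a common source of confusion here. importlib.import_module (Python 2.7+) resolves the full dotted path and returns the leaf, so the two same-named files stay distinct:

        import importlib

        # Fully qualified dotted paths pick exactly one of the two
        # files named file1.py; each call returns the leaf module.
        mod_a = importlib.import_module("module1.submodule1.file1")
        mod_b = importlib.import_module("module1.submodule3.file1")

        print(mod_a.__file__)  # .../module1/submodule1/file1.py
        print(mod_b.__file__)  # .../module1/submodule3/file1.py

    On older interpreters, the same leaf-module behavior is available through __import__'s fromlist argument.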

  • INSTEAD OF trigger in SQL Server - loses SCOPE_IDENTITY?

    - by kastermester
    Hey StackOverflow, I am (once again) having some issues with some SQL. I have a table on which I have created an INSTEAD OF trigger to enforce some business rules (the rules aren't really important). This works as intended. My issue is that now, when inserting data into this table, SCOPE_IDENTITY() returns NULL rather than the actual inserted identity - my guess is that this is because it is now out of scope. But then how do I get it back in scope? I am using SQL Server 2008. Per request, here's the SQL.

    Insert + scope code:

        INSERT INTO [dbo].[Payment]([DateFrom], [DateTo], [CustomerId], [AdminId])
        VALUES ('2009-01-20', '2009-01-31', 6, 1)

        SELECT SCOPE_IDENTITY()

    Trigger:

        CREATE TRIGGER [dbo].[TR_Payments_Insert]
        ON [dbo].[Payment]
        INSTEAD OF INSERT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            IF NOT EXISTS(SELECT 1 FROM dbo.Payment p
                          INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                          WHERE (i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                             OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo))
               AND NOT EXISTS (SELECT 1 FROM Inserted p
                               INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                               WHERE (i.DateFrom <> p.DateFrom AND i.DateTo <> p.DateTo)
                                 AND ((i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                                   OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo)))
            BEGIN
                INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
                SELECT DateFrom, DateTo, CustomerId, AdminId FROM Inserted
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION
            END
        END

    The code did work before the creation of this trigger. Also, I am using LINQ to SQL in C#, and as far as I can see I have no way of changing SCOPE_IDENTITY to @@IDENTITY - is there really no way out of this one?

  • How is it that json serialization is so much faster than yaml serialization in python?

    - by guidoism
    I have code that relies heavily on yaml for cross-language serialization, and while working on speeding some stuff up I noticed that yaml was insanely slow compared to other serialization methods (e.g., pickle, json). What really blows my mind is that json is so much faster than yaml when the output is nearly identical.

        >>> import yaml, cjson; d={'foo': {'bar': 1}}
        >>> yaml.dump(d, Dumper=yaml.SafeDumper)
        'foo: {bar: 1}\n'
        >>> cjson.encode(d)
        '{"foo": {"bar": 1}}'
        >>> from timeit import timeit
        >>> timeit("yaml.dump(d, Dumper=yaml.SafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        44.506911039352417
        >>> timeit("yaml.dump(d, Dumper=yaml.CSafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        16.852826118469238
        >>> timeit("cjson.encode(d)", setup="import cjson; d={'foo': {'bar': 1}}", number=10000)
        0.073784112930297852

    PyYaml's CSafeDumper and cjson are both written in C, so this isn't a C vs. Python speed issue. I've even added some random data to it to see if cjson is doing any caching, but it's still way faster than PyYaml. I realize that yaml is a superset of json, but how could the yaml serializer be two orders of magnitude slower with such simple input?
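
    One way to investigate (a sketch using the standard-library profiler; the yaml import is assumed available as in the question) is to profile the dump loop and see which PyYAML layers dominate the time:

        import cProfile

        import yaml

        d = {'foo': {'bar': 1}}

        # Profile 10,000 dumps, sorted by cumulative time; the report
        # breaks the cost down across PyYAML's representer, resolver,
        # serializer and emitter calls.
        cProfile.run(
            "for _ in range(10000): yaml.dump(d, Dumper=yaml.SafeDumper)",
            sort="cumulative")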

  • SQLAlchemy DetachedInstanceError with regular attribute (not a relation)

    - by haridsv
    I just started using SQLAlchemy and am getting a DetachedInstanceError, and I can't find much information on this anywhere. I am using the instance outside a session, so it is natural that SQLAlchemy is unable to load any relations that are not already loaded; however, the attribute I am accessing is not a relation - in fact this object has no relations at all. I found solutions such as eager loading, but I can't apply them because this is not a relation. I even tried "touching" this attribute before closing the session, but it still doesn't prevent the exception. What could be causing this exception for a non-relational property, even after it has been successfully accessed once before? Any help in debugging this issue is appreciated. I will meanwhile try to get a reproducible stand-alone scenario and update here.

    Update: this is the actual exception message with a few stack frames:

        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 159, in __get__
          return self.impl.get(instance_state(instance), instance_dict(instance))
        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 377, in get
          value = callable_(passive=passive)
        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/state.py", line 280, in __call__
          self.manager.deferred_scalar_loader(self, toload)
        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/mapper.py", line 2323, in _load_scalar_attributes
          (state_str(state)))
        DetachedInstanceError: Instance <ReportingJob at 0xa41cd8c> is not bound to a Session; attribute refresh operation cannot proceed

    The partial model looks like this:

        metadata = MetaData()
        ModelBase = declarative_base(metadata=metadata)

        class ReportingJob(ModelBase):
            __tablename__ = 'reporting_job'
            job_id = Column(BigInteger, Sequence('job_id_sequence'), primary_key=True)
            client_id = Column(BigInteger, nullable=True)

    And the field client_id is what causes the exception, with a usage like this:

        jobs = session \
            .query(ReportingJob) \
            .filter(ReportingJob.job_id == job_id) \
            .all()
        if jobs:
            # FIXME(Hari): Workaround for the attribute getting lazy-loaded.
            jobs[0].client_id
            return jobs[0]

    This is what triggers the exception later, out of the session scope:

        msg = msg + ", client_id: %s" % job.client_id
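
    For what it's worth, one common culprit in this situation (a hedged sketch, not necessarily this poster's root cause): by default the Session expires every loaded attribute on commit(), so even a previously accessed scalar column must be refreshed on its next access - which is impossible once the instance is detached. Building the session factory with expire_on_commit=False keeps loaded values readable after the session is gone:

        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker

        # Hypothetical engine URL; ReportingJob is the model shown above.
        engine = create_engine("sqlite:///reporting.db")

        # With expire_on_commit=False, commit() no longer expires loaded
        # attributes, so a detached instance keeps its column values.
        Session = sessionmaker(bind=engine, expire_on_commit=False)

        session = Session()
        job = session.query(ReportingJob).filter(ReportingJob.job_id == 1).first()
        session.commit()
        session.close()

        # No refresh is needed, so no DetachedInstanceError is raised.
        if job is not None:
            print("client_id: %s" % job.client_id)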

  • Facebook IOS SDK: Error in Publish Story Dialog

    - by lividsquirrel
    I've successfully set up the "DemoApp" project from the Facebook IOS SDK to use my "OKC ThunderCast" Facebook application. I have also configured another "Tester" application from scratch to successfully use the Facebook SDK and publish stories to my news feed. However, in my production application, I always get this result when calling the "dialog" method. The full description of the error message is "Error on line 52 at column 17: Opening and ending tag mismatch: div line 0 and body". Here's a detailed walkthrough of all of my code to make sure nothing is missed.

    1) A UIViewController calls the "authorize" method:

        NSArray *fbPerms = [NSArray arrayWithObjects:@"read_stream", @"offline_access", nil];
        [[FacebookSingleton sharedInstance].facebook authorize:fbPerms delegate:self];

    Note: the FacebookSingleton is a class I wrote that always returns a single instance of the "Facebook" class. I am using it successfully in other applications.

    2) Safari is opened and the user is successfully authenticated and authorized.

    3) The application is called back and the "handleOpenURL" method is called, which calls the "fbDidLogin" method of the UIViewController:

        - (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url {
            Facebook *fb = [FacebookSingleton sharedInstance].facebook;
            return [fb handleOpenURL:url];
        }

    4) The same UIViewController handles the "fbDidLogin" event and calls the "dialog" method:

        - (void)fbDidLogin {
            [[FacebookSingleton sharedInstance].facebook dialog:@"feed" andDelegate:self];
        }

    I also have the necessary "URL Schemes" and "URL Types" entries in the .plist file. To my eyes, I am using exactly the same code in the "DemoApp", "Tester", and production applications. But while the DemoApp and Tester work, I always see this HTML error in the feed dialog in my production application. Has anyone seen a similar issue? Could it be related to the Facebook "Bundle ID" setting in the Facebook application settings? Is there some build or .plist setting that is different? I have invested a great deal of time into troubleshooting with no success in several weeks. Thanks in advance...

  • "error creating files compilation failed" when publishing for iOS on Flash CS5.5

    - by user1662660
    I'm currently developing an app for iOS using Flash CS5.5 on Windows. All of my .ipa files have been created and tested with no problems so far - until now. It only started today and happens with every file I try to publish: I get the error "error creating files. compilation failed..." That is the only piece of information I am given about the error. All of my certificates and provisioning profiles are legitimate and still valid. Has anyone had any experience with this before?

    Since posting the question I have found a resolution that I will post for anyone who encounters the same problem. In this case I was unable to publish my .ipa file because I was opening the document (and Flash) from the Start menu in Windows 7. To fix this instance of the issue, I simply opened Flash first and then opened my document, and I was able to publish.

  • Avoid the problem with BigDecimal when migrating from Java 1.4 to Java 1.5+

    - by romaintaz
    Hello, I've recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize, when I try to store a "7.65E+7" BigDecimal value (76,500,000.00) in the database, Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here). In my code, the BigDecimal was created from a double using this kind of code:

        BigDecimal myBD = new BigDecimal("" + someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in DB...

    In more than 99% of cases, everything works fine. But in a few rare cases, the bug mentioned above occurs, and that's quite annoying. If I change the previous code to avoid the String constructor of BigDecimal, then I do not encounter the bug in my use cases:

        BigDecimal myBD = new BigDecimal(someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in DB...

    However, how can I be sure that this solution is the correct way to handle BigDecimal? So my question is how I should manage my BigDecimal values to avoid this issue: Do not use the new BigDecimal(String) constructor and use new BigDecimal(double) directly? Force Oracle to use toPlainString() instead of toString() when dealing with BigDecimal (and in this case, how)? Any other solution? Environment information: Java 1.6.0_14; Hibernate 2.1.8 (yes, it is a quite old version); Oracle JDBC 9.0.2.0, also tested with 10.2.0.3.0; Oracle database 10.2.0.3.0.

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. The way our system works (loosely speaking) is we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data and has already proven to be much better than what we had. When it comes to updating existing data, my initial thought was to query only for data that was updated: there is a "Modified" field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating constantly running "batch" processes that walk through all listings regardless of updated status. Is that the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

  • A simple WCF Service (POX) without complex serialization

    - by jammer59
    I'm a complete WCF novice. I'm trying to build and deploy a very, very simple IIS 7.0 hosted web service. For reasons outside of my control it must be WCF and not ASMX. It's a wrapper service for a pre-existing web application that simply does the following: 1) Receives a POST request whose body is XML-encapsulated form elements - something like <field>value</field><field>value</field>. This is untyped XML, and the XML is atomic (a form), not a list of records/objects. 2) Adds a couple of tags to the request XML and then invokes another HTTP-based service with a simple POST of the bare XML (the tags will actually be added by some internal SQL ops, but that isn't the issue). 3) Receives the XML response from the 3rd party service and relays it as the response to the original calling client in step 1. The clients (step 1) will be some sort of web-based scripting, but could be anything: .aspx, Python, PHP, etc. I can't have SOAP, and the usual WCF-based REST examples with their contracts and serialization have me confused. This seems like a very common and conceptually very simple problem. It would be easy to implement in code, but IIS-hosted WCF is a requirement. Any pointers?

  • Uncompress OpenOffice files for better storage in version control

    - by Craig McQueen
    I've heard discussion about how OpenOffice (ODF) files are compressed zip files of XML and other data, so making a tiny change to the file can potentially totally change the stored bytes, which means delta compression doesn't work well in version control systems. I've done basic testing on an OpenOffice file: unzipping it and then rezipping it with zero compression, using the Linux zip utility. OpenOffice will still happily open it. So I'm wondering if it's worth developing a small utility to run on ODF files each time just before I commit to version control. Any thoughts on this idea? Possible better alternatives? Secondly, what would be a good and robust way to implement this little utility? Bash shell that calls zip (probably Linux only)? Python? Any gotchas you can think of? Obviously I don't want to accidentally mangle a file, and there are several ways that could happen. Possible gotchas I can think of: insufficient disk space; some other permissions issue that prevents writing the file or temporary files; the ODF document is encrypted (these should probably just be left alone; the encryption probably also causes large file changes and thus prevents efficient delta compression anyway).
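
    A minimal Python sketch of such a utility (a hypothetical helper, not tested against every ODF corner case): it rewrites each zip entry with ZIP_STORED, and writes to a temporary file first so that a failure - disk full, permissions - can't mangle the original, which covers two of the gotchas above. Encrypted documents would still need to be detected and skipped separately.

        import shutil
        import zipfile

        def rezip_uncompressed(path):
            """Rewrite a zip-based ODF file with all entries stored uncompressed."""
            tmp = path + ".tmp"
            with zipfile.ZipFile(path, "r") as src:
                with zipfile.ZipFile(tmp, "w") as dst:
                    for info in src.infolist():
                        data = src.read(info.filename)
                        info.compress_type = zipfile.ZIP_STORED  # force no compression
                        dst.writestr(info, data)
            # Replace the original only after the rewrite fully succeeded.
            shutil.move(tmp, path)

    Entry order is preserved, so the mimetype entry stays first in the archive, which ODF consumers expect.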

  • Memory Leak question

    - by franz
    I am having a memory-leak issue with the following code. As far as I can tell, I don't see why the problem persists, but the objects are still not released when expected. I am detecting the problem in Instruments, and the following code is keeping its "card" objects alive even when it should have released them. Any help welcome.

        ...
        - (id)initDeckWithCardsPicked:(NSMutableArray *)cardsPicked andColors:(NSMutableArray *)cardColors {
            self = [self init];
            if (self != nil) {
                int count = [cardsPicked count];
                for (int i = 0; i < count; i++) {
                    int cardNum = [[cardsPicked objectAtIndex:i] integerValue];
                    Card *card = [[MemoryCard alloc] initWithSerialNumber:cardNum
                                                                 position:CGPointZero
                                                                    color:[cardColors objectAtIndex:i]];
                    [_cards addObject:card];
                    [card release];
                }
            }
            return self;
        }

        - (id)init {
            self = [super init];
            if (self != nil) {
                self.bounds = (CGRect){{0, 0}, [Card cardSize]};
                self.cornerRadius = 8;
                self.backgroundColor = kAlmostInvisibleWhiteColor;
                self.borderColor = kHighlightColor;
                self.cards = [NSMutableArray array];
            }
            return self;
        }
        ...

  • SharePoint isn't accepting new credentials initially when switching users

    - by Tiziani
    Hi all, I have a standard website (one web application and one site collection) with some custom pages and web parts. The issue I'm having is that when I try to switch users using "Sign In as a Different User" and enter new credentials (even for another site collection admin account), IE tries the account three times and then presents a 401 Access Denied screen. After that, if I erase all the access-denied stuff from the browser's URL, I'm logged in as the new account I had just entered - the one that was not accepted. After researching for a while on Google, I found a KB article (http://support.microsoft.com/kb/970814) that might be related, but I just tested it here and it didn't work at all. The modified method suggested by the KB is the following:

        function LoginAsAnother(url, bUseSource) {
            document.cookie = "loginAsDifferentAttemptCount=0";
            if (bUseSource == "1") {
                GoToPage(url);
            } else {
                //var ch = url.indexOf("?") >= 0 ? "&" : "?";
                //url += ch + "Source=" + escapeProperly(window.location.href);
                //STSNavigate(url);
                document.execCommand("ClearAuthenticationCache");
            }
        }

    But after making this change, it no longer asks for new credentials. Any ideas?

  • Database design question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales, for example the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- order number, unique across all orders
            ProductID, -- product ID, can be used as a key to look up product info in another table
            Price,     -- price of the product per unit at the time of the order
            Quantity,  -- quantity of the product for the order
            Total,     -- total cost of the order for the product (Price * Quantity)
            Date,      -- date of the order
            StoreID,   -- the store that created the order
            PRIMARY KEY(OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get information on products for any particular store. You could then determine which store has sold the most, which has made the most money, and which on average sells for the most/least. This would be very costly to run as a normal query at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered in the above scenario?

  • Amazon API ItemSearch returns (400) Bad Request.

    - by BuzzBubba
    I'm using a simple example from the Amazon documentation for ItemSearch and I get a strange error: "The remote server returned an unexpected response: (400) Bad Request." This is the code:

        public static void Main()
        {
            // Remember to create an instance of the Amazon service, including your Access ID.
            AWSECommerceServicePortTypeClient service = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

            AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

            // prepare an ItemSearch request
            ItemSearchRequest request = new ItemSearchRequest();
            request.SearchIndex = "Books";
            request.Title = "Harry+Potter";
            request.ResponseGroup = new string[] { "Small" };

            ItemSearch itemSearch = new ItemSearch();
            itemSearch.Request = new ItemSearchRequest[] { request };
            itemSearch.AWSAccessKeyId = accessKeyId;

            // issue the ItemSearch request
            try
            {
                ItemSearchResponse response = client.ItemSearch(itemSearch);

                // write out the results
                foreach (var item in response.Items[0].Item)
                {
                    Console.WriteLine(item.ItemAttributes.Title);
                }
            }
            catch (Exception e)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(e.Message);
                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine("Press any key to quit...");
                Clipboard.SetText(e.Message);
            }
            Console.ReadKey();
        }

    What is wrong?

  • MySQL-python 1.2.3 and OS X 10.5: 64- or 32-bit?

    - by Dave Everitt
    I've been happily using Django and MySQL in development on an existing machine running OS X 10.4 Tiger, and have set up a similar environment in 10.5 Leopard on a new 64-bit MacBook, with a working MySQL and Python 2.6.4. However, now that I want them to communicate, easy_install MySQL-python gave ld warnings that the file is not of the required architecture, which led me to test my Python 2.4.6 install (from the Mac OS X disc image):

        >>> import sys
        >>> sys.maxint
        2147483647

    Ah. So my Python install appears to be 32-bit and (I think?) won't install MySQL-python for my 64-bit MySQL. There are lots of hacks out there for MySQL-python on OS X (mostly for 1.2.2), but - after hours of reading - I'm pretty sure they won't fix this architecture mismatch. So I'm stuck because I can't decide whether to: give up, remove the 64-bit MySQL install (thorough methods, please?) and use the 32-bit MySQL disc image instead; or re-install Python in 64-bit mode from the tarball, --with-universal-archs=64-bit and --enable-universalsdk= as detailed in Python.org's 2.6 news. So my questions, for anyone who has encountered this issue: Is installing 64-bit Python on OS X 10.5 worth bothering with? If so, (naive, lazy question!) how are the two required arguments combined? If I just skip along in 32-bit (as on my working setup), what am I missing? I'm after a hassle-free install that's easy to reproduce on other machines (possible student use), so I'd really welcome your opinions, please!
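
    As a quick sanity check before rebuilding anything (a sketch; sys.maxint is Python 2 only), these report what the running interpreter was actually built as:

        import platform
        import struct

        # Pointer width of the running interpreter: 32 or 64.
        print(struct.calcsize("P") * 8)

        # (bits, linkage) as reported for the interpreter binary.
        print(platform.architecture())

    If the pointer width says 32 while the MySQL client library is 64-bit, the _mysql C extension can't link against it, which matches the ld warnings above.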

  • Internal bug tracking tickets - Redmine, Trac, or JIRA

    - by Tai Squared
    I've been looking at setting up Redmine, Trac, or JIRA to track issues. I want my development team to be able to create internal tickets that are never seen by clients, while clients can create/edit tickets that are seen by the internal team. From the Trac documentation, you can set permissions to create or view tickets, but it doesn't seem to allow viewing only certain tickets. It may be possible with Trac Fine Grained Permissions, but it doesn't appear so. The Redmine documentation mentions "Define your own roles and set their permissions in a click", but it doesn't appear to have that level of granularity. From the JIRA documentation: "At the moment JIRA is only able to support security at a project level or issue level. Currently there is no field level security available." According to this question, Redmine doesn't support internal tickets, so you would have to use multiple projects. I don't want a situation where I would have to create multiple projects - one internal, one external - with the external tickets brought into the internal repository. It seems this would lead to unnecessary overhead, and inevitably the projects wouldn't stay in sync. Is there any way with any of these products (possibly through a plug-in, if not the core product itself) to specify these permissions, or to simplify having two projects with different users and permissions that must still share information?

  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController. I am having an issue with a find-or-create operation, however. When the user hits the bottom of my table view, I query my server for another batch of content. If it doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it. I can't quite figure that part out. Here's what I'm doing thus far:

    1. Pass the returned array of values from my server to an NSOperation to process.
    2. In the operation, create a new managed object context to work with.
    3. In the operation, iterate through the array and execute a fetch request to see if each object exists (based on its server id). If the object doesn't exist, create it and insert it into the operation's managed object context.
    4. After the iteration completes, save the managed object context, which triggers a merge notification on my main thread.

    At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and could use a nudge in the right direction.

  • Error while using JSFUnit/HtmlUnit/CSSParser

    - by brianf
    We've just recently converted our project to using Maven for builds and dependency management, and after the conversion I'm getting the following exception while trying to run any JSFUnit tests in my project:

        Exception class=[java.lang.UnsupportedOperationException]
        com.gargoylesoftware.htmlunit.ScriptException: CSSRule com.steadystate.css.dom.CSSCharsetRuleImpl is not yet supported.
            at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:527)
            at net.sourceforge.htmlunit.corejs.javascript.Context.call(Context.java:537)
            ...

    All the dependencies and JARs for JSFUnit were pulled with Maven using the JBoss repository (http://repository.jboss.com/maven2/). We're using the following dependencies in the project: jboss-jsfunit-core 1.2.0.Final, jboss-jsfunit-richfaces 1.2.0.Final, richfaces-ui 3.3.2.GA, openfaces 2.0, JSF 1.2_12, Facelets 1.1.14. Before the dependencies were being managed by Maven, we were able to run our JSFUnit tests just fine. I was able to semi-fix the issue by using a ss_css2.jar file that someone had tucked into our WEB-INF/lib directory (from before the Maven conversion). I'm hoping to find out if there's something else I can do to fix the dependencies in Maven rather than resorting to managing some of the dependencies myself.

  • Grouped UITableView Footer Sometimes Hidden On Quick Scroll

    - by jdandrea
    OK, this one is a puzzler. There is one similar post but it's not similar enough to count, so I'm posting this one. :) I've got a grouped UITableView with a header and footer. The footer includes two UIButton views, side by side. Nothing major. Now … there is a toggle button in a UIToolbar at the bottom for more/less info in this table view. So I build my index paths to delete/insert with fade row animation, all the usual ingredients, sandwiched between beginUpdates and endUpdates calls on the UITableView … and this works fine! It also happens that my footer can sometimes be pushed off past the bottom of the display. Here's where it gets weird. If I drag my finger up the display, scrolling the view upward, I should eventually see that footer, right? Well … most of the time I do. BUT, if I flick my finger up for a faster scroll, the footer is missing. Even if you try to tap in that area - no response. However, if I scroll back down again, just to hide that footer (or rather, hide the area where the footer would normally be), and then scroll back up, it's there once again! This only happens when inserting rows. If I delete rows, the footer stays put … unless of course it was already hidden and I didn't perform the aforementioned incantation to get it back. :) I am trying to trace through this, but to no avail. I suppose tracing through scroll operations is a bit of a crazy proposition! Perhaps some creative logging … suggestions, anyone? Or is this a known issue in 3.1 where row inserts/deletes are concerned? (I don't recall seeing it until 3.1.)

  • [Google Maps] Trouble with invalid argument when switching jQueryUI based tabs

    - by Chad
    Here's a page with the issue. To reproduce the error, using IE: click the directions tab, then any of the others. What I'm trying to do is this: on page load, do nothing really; however, when the directions tab is shown, set up the map. Like so:

        $('#tabs').bind('tabsshow', function(event, ui) {
            if (ui.panel.id == "tabs-5") {
                // get map for directions
                var dirMap = new GMap2($("div#dirMap").get(0));
                dirMap.setCenter(new GLatLng(35.79648921414565, 139.40663874149323), 12);
                dirMap.enableScrollWheelZoom();
                dirMap.addControl(new PanoMapTypeControl());
                geocoder = new GClientGeocoder();
                $("#dirMap").resizable({
                    stop: function() {
                        dirMap.checkResize();
                    }
                });
                // clear dirText
                $("div#dirMapText").html("");
                dirMap.clearOverlays();
                var polygon = new GPolygon([
                    new GLatLng(35.724496338474104,139.3444061279297), new GLatLng(35.74748750802863,139.3363380432129),
                    new GLatLng(35.75765724051559,139.34303283691406), new GLatLng(35.76545779822543,139.3418312072754),
                    new GLatLng(35.767547103447725,139.3476676940918), new GLatLng(35.75835374997911,139.34955596923828),
                    new GLatLng(35.755149755962755,139.3567657470703), new GLatLng(35.74679090345495,139.35796737670898),
                    new GLatLng(35.74762682821177,139.36294555664062), new GLatLng(35.744422402303826,139.36346054077148),
                    new GLatLng(35.74860206266584,139.36946868896484), new GLatLng(35.735644401200986,139.36843872070312),
                    new GLatLng(35.73843117306677,139.36174392700195), new GLatLng(35.73592308277646,139.3531608581543),
                    new GLatLng(35.72686543236113,139.35298919677734), new GLatLng(35.724496338474104,139.3444061279297)
                ], "#f33f00", 5, 1, "#ff0000", 0.2);
                dirMap.addOverlay(polygon);
                // load directions
                directions = new GDirections(dirMap, $("div#dirMapText").get(0));
                directions.load("from: [email protected],139.37083393335342 to: Ruby [email protected],139.40663874149323");
            }
        });

    What the heck is causing the error? The IE JavaScript debugger claims the error lies in main.js, line 139, character 28 (the Google Maps API file), which is this line:

        function zf(a,b){a=a.style;a.width=b.getWidthString();a.height=b.getHeightString()}

    Any ideas? Thanks in advance!

  • How to mimic built-in .NET serialization idioms?

    - by Matt Enright
    I have a library (written in C#) for which I need to read/write representations of my objects to disk (or to any Stream) in a particular binary format (to ensure compatibility with C/Java library implementations). The format requires a fair amount of bit-packing and some DEFLATE'd bytestreams. I would like my library to be as idiomatically .NET as possible, however, and so would like to provide an API as close as possible to the normal binary serialization process. I'm aware of the ability to implement the IFormatter interface, but given that I really am unable to reuse any part of the built-in serialization stack, is it worth doing, or will it just bring unnecessary overhead? In other words: implement IFormatter and co., or just provide "Serialize"/"Deserialize" methods that act on a Stream? A good point brought up below concerns needing the serialization semantics for any case involving Remoting. In a case where using MarshalByRef objects is feasible, I'm pretty sure this won't be an issue, so leaving that aside: are there any benefits or drawbacks to using ISerializable/IFormatter versus a custom stack (or am I understanding Remoting incorrectly)?

  • How to implement session timeout on the web server side?

    - by Morgan Cheng
    I came across a web framework implementing in-memory sessions in this way: the session object is added to a cache with a timeout, and when the time is out, the session is removed from the cache automatically. To protect against race conditions, each request must acquire a lock on the given session object to proceed, and each request "touches" the session in the cache to refresh the timeout. Everything looks fine, until this scenario is discovered. Say one operation takes a long time - longer than the timeout. Another request comes in and waits on the session lock, which is currently held by the long-running request. Finally, the long-running request finishes and releases the lock. But since it took longer than the timeout, the session object has already been removed from the cache. This is inevitable, because the request holding the lock never had a chance to "touch" the session object in the cache while it was working. The second request gets the lock but cannot retrieve the expired session object. Oops... To fix this issue, the second request has to re-create the session object. But that is like digging a buried body out of its tomb and trying to bring it back to life: it causes buggy code. I'm wondering what the best way is to implement session timeout so it handles such a scenario. I know current platforms must have good session mechanisms; I just want to know the under-the-hood how.
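
    One way to square this (a minimal sketch of the idea, not any particular platform's implementation): make expiry lazy, never evict a session whose lock is currently held, and have the holder refresh the deadline when it releases. A long-running request then extends the session's life instead of letting it die mid-flight:

        import threading
        import time

        class SessionStore:
            def __init__(self, timeout_seconds):
                self.timeout = timeout_seconds
                self._guard = threading.Lock()
                self._sessions = {}  # sid -> [lock, data, deadline]

            def acquire(self, sid):
                with self._guard:
                    entry = self._sessions.get(sid)
                    expired = entry is not None and time.time() > entry[2]
                    # Only treat the session as dead if nobody is using it:
                    # a held lock means a request is still in flight.
                    if entry is None or (expired and not entry[0].locked()):
                        entry = [threading.Lock(), {}, 0]
                        self._sessions[sid] = entry
                    entry[2] = time.time() + self.timeout  # touch on access
                entry[0].acquire()  # waiters block here, outside the guard
                return entry[1]

            def release(self, sid):
                with self._guard:
                    entry = self._sessions[sid]
                    # Refresh on release, so a request that ran longer than
                    # the timeout still leaves a live session behind.
                    entry[2] = time.time() + self.timeout
                entry[0].release()

    The waiter that blocked during the long request then wakes up holding a live session, so nothing has to be dug up and resurrected.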

  • Filter a date property between begin and end dates with JDOQL

    - by Sergio del Amo
    I want to write a function that gets a list of Entry objects whose date field is between beginPeriod and endPeriod. I post below a code snippet which works, with a HACK: I have to subtract a day from the begin period date. It seems the greater-or-equal condition does not work. Any idea why I have this issue?

        public static List<Entry> getEntries(Date beginPeriod, Date endPeriod) {
            /* TODO
             * The greater-or-equal condition does not seem to work in the filter below.
             * Subtract a day and it seems to work.
             */
            Calendar calendar = Calendar.getInstance();
            calendar.set(beginPeriod.getYear(), beginPeriod.getMonth(), beginPeriod.getDate() - 1);
            beginPeriod = calendar.getTime();

            PersistenceManager pm = JdoUtil.getPm();
            Query q = pm.newQuery(Entry.class);
            q.setFilter("this.date >= beginPeriodParam && this.date <= endPeriodParam");
            q.declareParameters("java.util.Date beginPeriodParam, java.util.Date endPeriodParam");
            List<Entry> entries = (List<Entry>) q.execute(beginPeriod, endPeriod);
            return entries;
        }
