Search Results

Search found 12415 results on 497 pages for 'persistent store'.


  • ArrayList throwing exception on retrieval from google datastore (with gwt, java)

    - by sumeet
    I'm using Google Web Toolkit (GWT) with Java and the Google datastore as the database. The entity class holds an ArrayList, and when I try to retrieve the data from the database I get this exception:

    ```
    Type 'org.datanucleus.sco.backed.ArrayList' was not included in the set of types which can be serialized by this SerializationPolicy or its Class object could not be loaded. For security purposes, this type will not be serialized.
    ```

    I'm persisting with JDO. Entity code:

    ```java
    package com.ver2.DY.client;

    import java.io.Serializable;
    import java.util.ArrayList;

    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;

    import com.google.gwt.user.client.rpc.IsSerializable;

    @PersistenceCapable
    public class ChatInfo implements Serializable, IsSerializable {

        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Long topicId;

        @Persistent
        private String chatTopic;

        @Persistent
        private ArrayList<String> messages = new ArrayList<String>();

        @Persistent
        private boolean isFirstPost;

        public ChatInfo() {}

        public Long getTopicId() { return topicId; }
        public void setTopicId(Long topicId) { this.topicId = topicId; }
        public String getChatTopic() { return chatTopic; }
        public void setChatTopic(String chatTopic) { this.chatTopic = chatTopic; }
        public ArrayList<String> getMessages() { return messages; }
        public void addMessage(String newMsg) { messages.add(newMsg); }
        public boolean isFirstPost() { return isFirstPost; }
        public void setFirstPost(boolean isFirstPost) { this.isFirstPost = isFirstPost; }
    }
    ```

    Method in the DB class:

    ```java
    @Transactional
    public ChatInfo[] getAllChat() {
        PersistenceManager pm = PMF.get().getPersistenceManager();
        List chats = null;
        ChatInfo[] infos = null;
        String query = "select from " + ChatInfo.class.getName();
        try {
            chats = (List) pm.newQuery(query).execute();
            infos = new ChatInfo[chats.size()];
            // the original post was cut off after "for(int i=0;i";
            // the loop body below is a reconstruction
            for (int i = 0; i < chats.size(); i++) {
                infos[i] = (ChatInfo) chats.get(i);
            }
        } finally {
            pm.close();
        }
        return infos;
    }
    ```

    It is a bit strange because earlier I was able to insert and retrieve the data, but now it throws this exception. From searching the web I gather that I need to convert the ArrayList from the DataNucleus type back to java.util, but I'm not sure how to do that.
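    A common workaround for this class of GWT-RPC serialization error (an assumption on my part, not something stated in the original question) is to copy the DataNucleus-backed collection into a plain java.util.ArrayList on each entity before it crosses the RPC boundary, e.g. inside the copy loop of getAllChat above:

    ```java
    // Sketch of a common workaround: DataNucleus replaces the persisted field
    // with its own org.datanucleus.sco.backed.ArrayList, which the GWT
    // SerializationPolicy rejects. Copying into a plain java.util.ArrayList
    // before returning the entities over RPC avoids that. This assumes a
    // setMessages(...) setter is added to ChatInfo (the original class only
    // has addMessage).
    for (int i = 0; i < chats.size(); i++) {
        ChatInfo info = (ChatInfo) chats.get(i);
        info.setMessages(new ArrayList<String>(info.getMessages()));
        infos[i] = info;
    }
    ```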


  • Core Data multi-threading

    - by JK
    My app starts by presenting a table view whose data source is a Core Data SQLite store. When the app starts, a secondary thread with its own store coordinator and context is created to fetch updates from the web for data in the store. However, the fetched results controller is not notified of any resulting changes to the store (I presume because it has its own coordinator), and consequently the table is not updated with the store changes. What would be the most efficient way to refresh the context on the main thread? I am considering tracking the objectIDs of any objects changed on the secondary thread, sending those to the main thread when the secondary thread completes, and invoking [context refreshObject:...]. Any help would be greatly appreciated.


  • ExtJS combo setting problem

    - by Hubidubi
    Hi. I ran into an interesting problem while using combos in an input form. My form contains combos that get their data from JSON stores. It works fine when adding a new record, but when the form is opened to edit an existing record, sometimes the raw id appears as the selected value instead of its display text (e.g. there's 5 instead of "apple"). I think it tries to set the value before the combo's store has finished loading. Is there a way to solve this? I put the code that creates the combos down here:

    ```javascript
    function dictComboMaker(store, fieldLabel, hiddenName, name, allowBlank, myToolTipp) {
        var comboo = {
            xtype: 'combo',
            id: 'id-' + name,
            allowBlank: allowBlank,
            fieldLabel: fieldLabel,
            forceSelection: true,
            displayField: 'value',
            valueField: 'id',
            editable: false,
            name: name,
            hiddenName: hiddenName,
            minChars: 2,
            mode: 'remote',
            triggerAction: 'all',
            store: store
        };
        return comboo; // reconstructed: the original post was truncated here
    }

    function dictJsonMaker(url) {
        var store = new Ext.data.JsonStore({
            root: 'results',
            fields: ['id', 'value'],
            url: url,
            autoLoad: true
        });
        return store;
    }

    var comboKarStore = dictJsonMaker('/service/karok');
    var comboKar = dictComboMaker(comboKarStore, 'Kar', 'karid', 'kar', false, '');
    // comboKar is then added to the form
    ```

    Hubidubi


  • Shell script query

    - by WENzER
    Hi, I have a database and I want to get the script of every stored procedure in it. I have used the dba_source table to get the script of a stored procedure. Now I want to write a shell script that writes the script of each stored procedure to a separate file. My approach so far: first I run a query that returns the names of all stored procedures and store those names in a file, then I loop over the names in that file and fetch each procedure's script. I am able to do each of these steps separately, but not in a single shell script, since I am new to shell scripting. Please help me. Thanks


  • How do I update with a newly-created detached entity using NHibernate?

    - by Daniel T.
    Explanation: Let's say I have an object graph that's nested several levels deep and each entity has a bi-directional relationship with the next:

    A -> B -> C -> D -> E

    Or in other words, A has a collection of B and B has a reference back to A, B has a collection of C and C has a reference back to B, etc. Now let's say I want to edit some data for an instance of C. In WinForms, I would use something like this:

    ```csharp
    C instanceOfC;
    using (var session = SessionFactory.OpenSession())
    {
        // get the instance of C with Id = 3
        // (the terminal query operator was missing in the original snippet)
        instanceOfC = session.Linq<C>().Where(x => x.Id == 3).SingleOrDefault();
    }

    SendToUIAndLetUserUpdateData(instanceOfC);

    using (var session = SessionFactory.OpenSession())
    {
        // re-attach the detached entity and update it
        session.Update(instanceOfC);
    }
    ```

    In plain English, we grab a persistent instance out of the database, detach it, give it to the UI layer for editing, then re-attach it and save it back to the database.

    Problem: This works fine for WinForms applications because we're using the same entity throughout; the only difference is that it goes from persistent to detached to persistent again. The problem occurs when I'm using a web service and a browser, sending JSON data back and forth. In this case, the data that comes back is no longer a detached entity, but rather a transient one that just happens to have the same ID as the persistent one. If I use this entity to update, it will wipe out the relationships to B and D unless I send the entire object graph over to the UI and get it back in one piece.

    Question: My question is: how do I serialize detached entities over the web, receive them back, and save them, while preserving any relationships that I didn't explicitly change? I know about ISession.SaveOrUpdateCopy and ISession.Merge() (they seem to do the same thing?), but these will still wipe out the relationships if I don't explicitly set them. I could copy the fields from the transient entity to the persistent entity one by one, but that doesn't work too well for relationships, and I'd have to handle version comparisons manually.


  • Minimum OS version number, iPhone app

    - by Michael Frost
    Hi all. I've built an iPhone app which is live in the App Store. When I originally submitted the app, it showed up in the App Store as requiring iPhone OS 3.1.3. When later updating the app, I made sure the Xcode settings for the App Store build target had the Base SDK version set to 3.1.3 and the Deployment Target version set to 3.0; however, it still shows up in the App Store as requiring 3.1.3. From what I've understood, the Deployment Target version is the one that sets the requirement in the App Store? Or is there any information concerning this that I should have updated in iTunes Connect when submitting the updated app? Thanks, Michael


  • Dojo JSON call back always returns an error

    - by Sunny
    Hi guys, I am using Dojo and making an AJAX call to a Java class, trying to get the program's output into an alert box on the client.

    ```javascript
    var showResult = function (result) {
        console.log("Showing Result()");
        var store = new dojo.data.ItemFileReadStore({ data: result });
        console.dir(store);
        store.fetch({
            onItem: function (data) {
                alert("Hie");
            },
            onError: function (error, request) {
                alert("ERROR");
            }
        });
    };
    ```

    This is my code; showResult is the callback function from the xhr request. I can see console.dir(store) printed in Firebug, but the fetch call always ends up in the onError block. My store data is of the form {info="Test Message"}, and I need to retrieve "Test Message" and display it in an alert box. Any help?


  • Logical Programming Problem

    - by user353060
    Hello, I've been trying to solve this problem for quite some time but I am having trouble with it. Say that on each trigger you receive a set of values:

    First trigger: you get 1
    Second trigger: you get 1, 2
    Third trigger: you get 1, 2, 3

    So I store 1. On the 2nd trigger I store only 2, since 1 already exists. On the 3rd trigger I store only 3, since 1 and 2 already exist, so in total I have stored 1, 2, 3. As you can see, we can easily check for new values by comparing old != new. Here comes the problem:

    Fourth trigger: you get 1, 2, 4

    On the 4th trigger I keep 1 and 2 because they already exist, but how do I check against 3, remove 3 from the store, and detect that 4 is new? If you are having problems understanding this, feel free to ask for clarification. Thanks!
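    A set-difference pass over each incoming batch handles both directions. Here is a minimal sketch (shown in Java, with illustrative names): anything in the new batch but not in the store is an addition, anything in the store but missing from the batch is a removal.

    ```java
    import java.util.HashSet;
    import java.util.Set;

    class TriggerStore {
        private final Set<Integer> store = new HashSet<>();

        // Reconcile the store with an incoming trigger batch.
        void onTrigger(Set<Integer> batch) {
            Set<Integer> added = new HashSet<>(batch);
            added.removeAll(store);       // new values, e.g. {4} on the fourth trigger

            Set<Integer> removed = new HashSet<>(store);
            removed.removeAll(batch);     // vanished values, e.g. {3} on the fourth trigger

            store.addAll(added);
            store.removeAll(removed);
        }
    }
    ```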


  • Traversing ORM relationships returns duplicate results

    - by NKing253
    I have 4 tables -- store, catalog_galleries, catalog_images, and catalog_financials. When I traverse the relationship from store --> catalog_galleries --> catalog_images in other words: store.getCatalogGallery().getCatalogImages() I get duplicate records. Does anyone know what could be the cause of this? Any suggestions on where to look? The store table has a OneToOne relationship with catalog_galleries which in turn has a OneToMany relationship with catalog_images and an eager fetch type. The store table also has a OneToMany relationship with catalog_financials.


  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by Tinkerpop to create Open Source Software to make property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.

    What's in a Property Graph?

    In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail --- a direction. A multi-graph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.

    Key-Value Stores for Property Graphs

    Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example:

    ```
    Vertex["name"] = "Mary"
    Vertex["age"]  = 28
    Vertex["ID"]   = 12345
    ```

    and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.

    Oracle NoSQL Database as a Scalable Graph Database

    While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantees around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple. Major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices /Personnel/Vertex/1 and /Personnel/Vertex/2 may be stored on different servers, but /Personnel/Vertex/1-/name and /Personnel/Vertex/1-/age will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node. Moreover, Oracle NoSQL Database provides a storeIterator which allows us to store a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).

    Fork It!

    The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
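    As a concrete illustration of that key layout, here is a minimal sketch using the Oracle NoSQL Database Java driver. The store name and host:port are assumptions, and error handling is omitted; it is a sketch of the idea, not the Blueprints implementation itself.

    ```java
    import java.util.Arrays;

    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.Key;
    import oracle.kv.Value;

    // The vertex id lives in the major key, so vertices spread across storage
    // nodes; each property name lives in the minor key, so all of a vertex's
    // properties land on the same node and can be read with a single lookup.
    public class VertexPropertySketch {
        public static void main(String[] args) {
            KVStore store = KVStoreFactory.getStore(
                    new KVStoreConfig("kvstore", "localhost:5000")); // assumed store/host

            Key name = Key.createKey(
                    Arrays.asList("Personnel", "Vertex", "1"),  // major: /Personnel/Vertex/1
                    Arrays.asList("name"));                     // minor: -/name
            store.put(name, Value.createValue("Mary".getBytes()));

            Key age = Key.createKey(
                    Arrays.asList("Personnel", "Vertex", "1"),
                    Arrays.asList("age"));                      // minor: -/age
            store.put(age, Value.createValue("28".getBytes()));
        }
    }
    ```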


  • Oracle Retail Mobile Point-of-Service

    - by David Dorf
    When most people discuss mobile in retail, they immediately go to shopping applications. While I agree the consumer side of mobile is huge, I believe it's also important to arm store associates with mobile tools. There are around a dozen major roll-outs of mobile POS to chain retailers, and all have been successful. This does not, however, signal the demise of traditional registers. Retailers will adopt mobile POS slowly and reduce the number of fixed registers over time, but there's likely to be a combination of both for the foreseeable future. Even Apple retains at least one fixed register in every store; you just have to know where to look.

    The business benefits for mobile POS are pretty straightforward:

    1. Faster checkout. Walmart's CFO recently reported that for every second they shave off the average transaction time, they can potentially save $12M a year in labor. I think it's more likely that labor will be redeployed to enhance the customer experience.

    2. Smarter associates. The sales associates on the floor need the same access to information that consumers have, if not more. They need ready access to product details, reviews, inventory, etc. to meet consumer expectations. In a recent study, 40% of consumers said a savvy store associate can impact their final product selection more than a website.

    3. Lower costs. Mobile POS hardware (iPod touch + sled) costs about a fifth of fixed registers, not to mention the reclaimed space that can be used for product displays.

    But almost all mobile POS solutions can claim those benefits equally. Where there's differentiation is on the technical side. Oracle recently announced availability of the Oracle Retail Mobile Point-of-Service, and it has three big technology advantages in the market:

    1. Portable. We used a popular open-source component called PhoneGap that abstracts the app from the underlying OS and hardware so that iOS, Android, and other platforms can be supported. Further, we used Web technologies such as HTML5 and JavaScript, which are commonly known by many programmers, as opposed to Objective-C, which is more difficult to find. The screen can adjust to different form factors and sizes, just like you see with browsers. In the future when a new, zippy device gets released, retailers will have the option to move to that device more easily than if they had used a native app.

    2. Flexible. Our Mobile POS is free with the Oracle Retail Point-of-Service product. Retailers can use any combination of fixed and mobile registers, and those ratios can change as required. Perhaps start with 1 mobile and 4 fixed per store, then transition over time to 4 mobile and 1 fixed without any additional software licenses. Our scalable solution supports lots of combinations.

    3. Consistent. Because our Mobile POS is fully integrated with our traditional POS, the same business logic is reused. Third-party Mobile POS solutions often handle pricing, promotions, and tax calculations separately, leading to possible inconsistencies within the store. That won't happen with Oracle's solution.

    For many retailers, Mobile POS can lower costs, increase customer service, and generally enhance a consumer's in-store experience. Apple led the way, but lots of other retailers are discovering the many benefits of adding mobile capabilities in their stores. Just be sure to examine both the business and technology benefits so you get the most value from your solution for the longest period of time.


  • Mobile Deals: the Consumer Wants You in Their Pocket

    - by Mike Stiles
    Mobile deals offer something we talk about a lot in social marketing: relevant content. If a consumer is already predisposed to liking your product and gets a timely deal for it that's easy and convenient to use, not only do you score on the marketing side, it clearly generates some of that precious ROI that's being demanded of social.

    First, a quick gut-check on the public's adoption of mobile. Nielsen figures have 55.5% of US mobile owners using smartphones. If young people are indeed the future, you can count on the move to mobile exploding exponentially. Teens are the fastest growing segment of smartphone users, and 58% of them have one. But the largest demographic of smartphone users is 25-34, at 74%. That tells you a focus on mobile will yield great results now, and even better results straight ahead.

    So we can tell, both from statistics and from all the faces around you that are buried in their smartphones, that this is where consumers are. But are they looking at you? Do you have a valid reason why they should? Everybody likes a good deal. BIA/Kelsey says US consumers will spend $3.6 billion this year on daily deals (the Groupons and LivingSocials of the world), up 87% from 2011. The report goes on to say over 26% of small businesses are either "very likely" or "extremely likely" to offer a deal in the next 6 months. Retail Gazette reports 58% of consumers shop with coupons, a 40% increase in 4 years.

    When you consider that a deal can be the impetus for a real-world transaction, a first-time visit to a store, an online purchase, entry into a loyalty program, a social referral, a new fan or follower, etc., that 26% figure shows us there's a lot of opportunity being left on the table by brands.

    The existing and emerging technologies behind mobile devices make the benefits of offering deals listed above possible. Take how mobile payment systems are being tied into deal delivery and loyalty programs. If it's really easy to use a coupon or deal, it'll get used. If it's complicated, it'll be passed over as "not worth it." When you can pay with your mobile via technologies that connect store and user, you get the deal, you get the loyalty credit, you pay, and your receipt is uploaded, all in one easy swipe. Nothing to keep track of, nothing to lose or forget about. And the store "knows" you, so future offers will be based on your tastes.

    Consider the endgame. A customer who's a fan of your belt buckle store's Facebook Page is in one of your physical retail locations. They pull up your app, because they've gotten used to a loyalty deal being offered when they go to your store. Voila: a 10% discount, active for the next 30 minutes. Maybe the app also surfaces social references to your brand made by friends so they can check out a buckle someone's raving about. If they aren't a fan of your Page or don't have your app, perhaps they've opted into location-based deal services so you can still get them that 10% deal while they're in the store. Or maybe they've walked in with a pre-purchased Groupon or LivingSocial voucher. They pay with one swipe, and you've learned about their buying preferences, credited their loyalty account, and can encourage them to share a pic of their new buckle on social. Happy customer. Happy belt buckle company. All because the brand was willing to use the tech that's available to meet consumers where they are, incentivize them, and show them how much they're valued through rewards.


  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else seems to ignore it for hosts lookups; e.g. if I dig apache.org 3 times, none of the queries hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting, and the queries don't even reach nscd.

    nsswitch.conf:

    ```
    # Begin /etc/nsswitch.conf
    passwd: files
    group: files
    shadow: files
    publickey: files
    hosts: cache files dns
    networks: files
    protocols: files
    services: files
    ethers: files
    rpc: files
    netgroup: files
    # End /etc/nsswitch.conf
    ```

    nscd.conf:

    ```
    #
    # /etc/nscd.conf
    #
    # An example Name Service Cache config file. This file is needed by nscd.
    #
    # Legal entries are:
    #
    #   logfile                 <file>
    #   debug-level             <level>
    #   threads                 <initial #threads to use>
    #   max-threads             <maximum #threads to use>
    #   server-user             <user to run server as instead of root>
    #                           server-user is ignored if nscd is started with -S parameters
    #   stat-user               <user who is allowed to request statistics>
    #   reload-count            unlimited|<number>
    #   paranoia                <yes|no>
    #   restart-interval        <time in seconds>
    #
    #   enable-cache            <service> <yes|no>
    #   positive-time-to-live   <service> <time in seconds>
    #   negative-time-to-live   <service> <time in seconds>
    #   suggested-size          <service> <prime number>
    #   check-files             <service> <yes|no>
    #   persistent              <service> <yes|no>
    #   shared                  <service> <yes|no>
    #   max-db-size             <service> <number bytes>
    #   auto-propagate          <service> <yes|no>
    #
    # Currently supported cache names (services): passwd, group, hosts, services
    #

    logfile         /var/log/nscd.log
    threads         4
    max-threads     32
    server-user     nobody
    # stat-user     somebody
    debug-level     9
    # reload-count  5
    paranoia        no
    # restart-interval 3600

    enable-cache            passwd      yes
    positive-time-to-live   passwd      600
    negative-time-to-live   passwd      20
    suggested-size          passwd      211
    check-files             passwd      yes
    persistent              passwd      yes
    shared                  passwd      yes
    max-db-size             passwd      33554432
    auto-propagate          passwd      yes

    enable-cache            group       yes
    positive-time-to-live   group       3600
    negative-time-to-live   group       60
    suggested-size          group       211
    check-files             group       yes
    persistent              group       yes
    shared                  group       yes
    max-db-size             group       33554432
    auto-propagate          group       yes

    enable-cache            hosts       yes
    positive-time-to-live   hosts       3600
    negative-time-to-live   hosts       20
    suggested-size          hosts       211
    check-files             hosts       yes
    persistent              hosts       yes
    shared                  hosts       yes
    max-db-size             hosts       33554432

    enable-cache            services    yes
    positive-time-to-live   services    28800
    negative-time-to-live   services    20
    suggested-size          services    211
    check-files             services    yes
    persistent              services    yes
    shared                  services    yes
    max-db-size             services    33554432
    ```

    resolv.conf:

    ```
    # Generated by dhcpcd from eth0
    nameserver 127.0.0.1
    domain westell.com
    nameserver 192.168.1.1
    nameserver 208.67.222.222
    nameserver 208.67.220.220
    ```

    As kind of a side note, I'm using Arch Linux.


  • ExtJS PropertyGrid best practices to achieve an update?

    - by Tom
    Hello, I am using ExtJS for a project. Usually everything is pretty straightforward with ExtJS, but with the PropertyGrid I am not sure, and I'd like some advice about this piece of code. First, the store that populates the property grid, with its load listener:

    ```javascript
    var configStore = new Ext.data.JsonStore({
        // store config
        autoLoad: true,
        url: url.remote,
        baseParams: { xaction: 'read' },
        storeId: 'configStore',
        // reader config
        idProperty: 'id_config',
        root: 'config',
        totalProperty: 'totalcount',
        fields: [
            { name: 'id_config' },
            { name: 'email_admin' },
            { name: 'default_from_addr' },
            { name: 'default_from_name' },
            { name: 'default_smtp' }
        ],
        listeners: {
            load: {
                fn: function (store, records, options) {
                    // get the property grid component
                    var propGrid = Ext.getCmp('propGrid');
                    // make sure the property grid exists
                    if (propGrid) {
                        // populate the property grid with store data
                        propGrid.setSource(store.getAt(0).data);
                    }
                }
            }
        }
    });
    ```

    Here is the property grid:

    ```javascript
    var propsGrid = new Ext.grid.PropertyGrid({
        renderTo: 'prop-grid',
        id: 'propGrid',
        width: 462,
        autoHeight: true,
        propertyNames: {
            tested: 'QA',
            borderWidth: 'Border Width'
        },
        viewConfig: {
            forceFit: true,
            scrollOffset: 2 // the grid will never have scrollbars
        }
    });
    ```

    So far so good. But the next button triggers an old-school update, and my question is: is that the proper way to update this component? Or is it better to use an editor, or something else? For regular grids I use the store methods to do the update, delete, etc.

    ```javascript
    new Ext.Button({
        renderTo: 'button-container',
        text: 'Update',
        handler: function () {
            var grid = Ext.getCmp("propGrid");
            var source = grid.getSource();
            var jsonDataStr = Ext.encode(source);
            var requestCg = {
                url: url.update,
                method: 'post',
                params: { config: jsonDataStr, xaction: 'update' },
                timeout: 120000,
                callback: function (options, success, response) {
                    alert(success + "\t" + response);
                }
            };
            Ext.Ajax.request(requestCg);
        }
    });
    ```

    The examples are really scarce on this one, even in books about ExtJS! And thanks for reading.


  • Java Appengine APPSTATS causing java out of memory error

    - by aloo
    I have several servlets in my Java App Engine app that do in-memory sorting and take on the order of seconds to complete. They complete error-free. However, I recently enabled Appstats for App Engine and started receiving the following error:

    ```
    java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Unknown Source)
        at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
        at java.lang.AbstractStringBuilder.append(Unknown Source)
        at java.lang.StringBuilder.append(Unknown Source)
        at java.lang.StringBuilder.append(Unknown Source)
        at java.lang.StringBuilder.append(Unknown Source)
        at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.write(TextFormat.java:344)
        at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.print(TextFormat.java:332)
        at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printUnknownFields(TextFormat.java:249)
        at com.google.appengine.repackaged.com.google.protobuf.TextFormat.print(TextFormat.java:47)
        at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printToString(TextFormat.java:73)
        at com.google.appengine.tools.appstats.Recorder.makeSummary(Recorder.java:157)
        at com.google.appengine.tools.appstats.Recorder.makeSyncCall(Recorder.java:239)
        at com.google.apphosting.api.ApiProxy.makeSyncCall(ApiProxy.java:98)
        at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:54)
        at com.google.appengine.api.datastore.PreparedQueryImpl.runQuery(PreparedQueryImpl.java:127)
        at com.google.appengine.api.datastore.PreparedQueryImpl.asQueryResultList(PreparedQueryImpl.java:81)
        at org.datanucleus.store.appengine.query.DatastoreQuery.fulfillEntityQuery(DatastoreQuery.java:379)
        at org.datanucleus.store.appengine.query.DatastoreQuery.executeQuery(DatastoreQuery.java:289)
        at org.datanucleus.store.appengine.query.DatastoreQuery.performExecute(DatastoreQuery.java:239)
        at org.datanucleus.store.appengine.query.JDOQLQuery.performExecute(JDOQLQuery.java:89)
        at org.datanucleus.store.query.Query.executeQuery(Query.java:1489)
        at org.datanucleus.store.query.Query.executeWithArray(Query.java:1371)
        at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
        at com.poo.pooserver.dataaccess.DataAccessHelper.getPooStream(DataAccessHelper.java:204)
        at com.poo.pooserver.GetPooStreamServlet.doPost(GetPooStreamServlet.java:58)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
        at com.google.appengine.tools.appstats.AppstatsFilter.doFilter(AppstatsFilter.java:92)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
    ```


  • Rails - difference between config.cache_store and config.action_controller.cache_store?

    - by gsmendoza
    If I set this in my environment:

    ```ruby
    config.action_controller.cache_store = :mem_cache_store
    ```

    then ActionController::Base.cache_store will use a memcached store, but Rails.cache will use a memory store instead:

    ```
    $ ./script/console
    >> ActionController::Base.cache_store
    => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>>
    >> Rails.cache
    => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}>
    ```

    In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache used the memcached store, so I'm surprised that it uses a memory store. If I change the cache_store setting in my environment to

    ```ruby
    config.cache_store = :mem_cache_store
    ```

    both ActionController::Base.cache_store and Rails.cache will now use the same memcached store, which is what I expect:

    ```
    $ ./script/console
    >> ActionController::Base.cache_store
    => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
    >> Rails.cache
    => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
    ```

    However, when I run the app, I get a "marshal dump" error on the line where I call Rails.cache.fetch(key){ object }:

    ```
    no marshal_dump is defined for class Proc

    Extracted source (around line #1):
    1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... }

    vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump'
    vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace'
    ```

    What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?


  • In NHibernate, why does SaveOrUpdate() update the Version, but SaveOrUpdateCopy() doesn't?

    - by Daniel T.
    I have a versioned entity, and this is what happens when I use SaveOrUpdate() vs. SaveOrUpdateCopy():

    ```csharp
    // create new entity
    var entity = new Entity { Id = Guid.Empty };
    Console.WriteLine(entity.Version); // prints out 0

    // save the new entity
    GetNewSession();
    entity.SaveOrUpdate();
    Console.WriteLine(entity.Version); // prints out 1

    GetNewSession();

    // loads the persistent entity into the session, so we have to use
    // SaveOrUpdateCopy() to merge the following transient entity
    var dbEntity = Database.GetAll<Entity>();

    // new, transient entity used to update the persistent entity in the session
    var newEntity = new Entity { Id = Guid.Empty };
    newEntity.SaveOrUpdateCopy();
    Console.WriteLine(entity.Version); // prints out 1, but should be 2
    ```

    Why is the version number not updated by SaveOrUpdateCopy()? As I understand it, the transient entity is merged with the persistent entity, and the SQL calls confirm that the data is updated. At this point, shouldn't newEntity become persistent, and the version number be incremented?


  • Google App Engine ClassNotPersistenceCapableException

    - by Frank
    I have the following class:

    ```java
    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.IdentityType;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;

    import com.google.appengine.api.datastore.*;

    @PersistenceCapable(identityType = IdentityType.APPLICATION)
    public class PayPal_Message {

        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Long id;

        @Persistent
        private Text content;

        @Persistent
        private String time;

        public PayPal_Message(Text content, String time) {
            this.content = content;
            this.time = time;
        }

        public Long getId() { return id; }
        public Text getContent() { return content; }
        public String getTime() { return time; }
        public void setContent(Text content) { this.content = content; }
        public void setTime(String time) { this.time = time; }
    }
    ```

    It used to be in a package and worked fine. Now I've put all classes in the default package, which causes this error:

    ```
    org.datanucleus.jdo.exceptions.ClassNotPersistenceCapableException: The class "The class "PayPal_Message" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found." is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data for the class is not found.
    NestedThrowables: org.datanucleus.exceptions.ClassNotPersistableException: The class "PayPal_Message" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found.
    ```

    What should I do to fix it?


  • Google App Engine JDO error caused by GregorianCalendar ?

    - by Frank
    My class looks like this:

    ```java
    import java.io.Serializable;
    import java.util.GregorianCalendar;

    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.IdentityType;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;

    @PersistenceCapable(identityType = IdentityType.APPLICATION)
    public class Contact_Info implements Serializable {

        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        Long Id;

        public static final long serialVersionUID = 26362862L;

        @Persistent
        String Contact_Id = "";

        @Persistent
        GregorianCalendar Date_1;

        public Contact_Info() {}

        public void setId(Long value) { Id = value; }
        public Long getId() { return Id; }
        public void setContact_Id(String value) { Contact_Id = value; }
        public String getContact_Id() { return Contact_Id; }
        public void setDate_1(GregorianCalendar value) { Date_1 = value; }
        public GregorianCalendar getDate_1() { return Date_1; }
        public String toString() { return Contact_Id; }
    }
    ```

    When it runs, I get the following error:

    ```
    java.lang.UnsupportedOperationException
        org.datanucleus.store.appengine.EntityUtils.getPropertyName(EntityUtils.java:62)
        org.datanucleus.store.appengine.DatastoreFieldManager.storeObjectField(DatastoreFieldManager.java:839)
        org.datanucleus.state.AbstractStateManager.providedObjectField(AbstractStateManager.java:1037)
        PayPal_Monitor.Contact_Info.jdoProvideField(Contact_Info.java)
        PayPal_Monitor.Contact_Info.jdoProvideFields(Contact_Info.java)
        org.datanucleus.state.JDOStateManagerImpl.provideFields(JDOStateManagerImpl.java:2715)
        org.datanucleus.store.appengine.DatastorePersistenceHandler.insertPreProcess(DatastorePersistenceHandler.java:341)
        org.datanucleus.store.appengine.DatastorePersistenceHandler.insertObjects(DatastorePersistenceHandler.java:251)
    ```

    If I take out the GregorianCalendar Date_1 field, it works correctly. What should I do to fix it? I do need the date in it. Frank
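    A commonly suggested fix (a sketch under the assumption that a datastore-native type is acceptable) is to persist java.util.Date, which the App Engine datastore supports out of the box, and convert to and from the calendar in the accessors:

    ```java
    import java.util.Date;
    import java.util.GregorianCalendar;

    import javax.jdo.annotations.Persistent;

    // Sketch: store a java.util.Date (a datastore-supported type) instead of
    // GregorianCalendar, converting in the accessors so callers still see a
    // calendar. Field and method names mirror the question's Contact_Info class;
    // the rest of that class is unchanged.
    public class Contact_Info_Sketch {

        @Persistent
        private Date date1;

        public void setDate_1(GregorianCalendar value) {
            this.date1 = (value == null) ? null : value.getTime();
        }

        public GregorianCalendar getDate_1() {
            if (date1 == null) return null;
            GregorianCalendar cal = new GregorianCalendar();
            cal.setTime(date1);
            return cal;
        }
    }
    ```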


  • When to Store Temporary Values in Hidden Field vs. Session vs. Database?

    - by viatropos
    I am trying to build a simple OpenID login panel similar to how Stack Overflow's works. The goal is:

    1. User clicks an OpenID/OAuth provider.
    2. OpenID/OAuth stuff happens; we end up with the result (already made that).
    3. Then we want to confirm that the user wants to actually create a new account (vs. associating the account with another OpenID account).

    On Stack Overflow, they keep hidden fields on a form that looks like this:

    ```html
    <form action="/users/openidconfirm" method="post">
      <p>This is an OpenID we haven't seen on Stack Overflow before:</p>
      <p class="openid-identifier">https://me.yahoo.com/a/some-hash</p>
      <p>Do you want to associate this OpenID with your Stack Overflow account?</p>
      <div>
        <input type="hidden" name="fkey" value="9792ab2zza1q2a4ac414casdfa137eafba7">
        <input type="hidden" name="s" value="c1a3q133-11fa-49r0-a7bz-da19849383218">
        <input type="submit" value="Associate OpenID">
        <input type="button" value="Cancel" onclick="window.location.href = 'http://stackoverflow.com/users/169992/viatropos?s=c1a3q133-11fa-49r0-a7bz-da19849383218'">
      </div>
    </form>
    ```

    My initial question is: what are those hashes fkey and s? Not that I really care what these specific hashes are, but what seems to be happening is that they have processed the OpenID response, saved it to the DB in a temporary object or something, and generated these keys from it, because they don't look like OAuth keys to me.

    The main situation is: after I have processed the OpenID/OAuth response, I don't yet want to create a new user/account until the user submits the "confirm" form. Should I store the keys and tokens temporarily in a "confirm" form like this? Or is there a better way? It seems that using a temporary database object would be a lot of work to manage properly. Thanks for the help. Lance
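    For the hidden-field route, one way to avoid a temporary database object (a sketch of a general technique, not a claim about how Stack Overflow's fkey/s values are actually built) is to sign the verified identifier with a server-side secret, so the confirm POST can be validated statelessly:

    ```java
    import java.nio.charset.StandardCharsets;

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Sketch: HMAC-sign the verified OpenID identifier before putting it in a
    // hidden field; on the confirm POST, recompute and compare. The secret and
    // class name are illustrative assumptions.
    public class SignedField {
        private static final byte[] SECRET =
                "change-me-server-secret".getBytes(StandardCharsets.UTF_8);

        public static String sign(String value) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            byte[] tag = mac.doFinal(value.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : tag) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static boolean verify(String value, String signature) throws Exception {
            // fine for a sketch; use a constant-time comparison in production
            return sign(value).equals(signature);
        }
    }
    ```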


  • JDO, GAE: Load object group by child's key

    - by tiex
    I have an owned one-to-many relationship between two objects:

    ```java
    @PersistenceCapable(identityType = IdentityType.APPLICATION)
    public class AccessInfo {

        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private com.google.appengine.api.datastore.Key keyInternal;
        ...

        @Persistent
        private PollInfo currentState;

        public AccessInfo() {}

        public AccessInfo(String key, String voter, PollInfo currentState) {
            this.voter = voter;
            this.currentState = currentState;
            setKey(key); // this key is unique in the whole system
        }

        public void setKey(String key) {
            this.keyInternal = KeyFactory.createKey(
                    AccessInfo.class.getSimpleName(), key);
        }

        public String getKey() {
            return this.keyInternal.getName();
        }
    }
    ```

    and

    ```java
    @PersistenceCapable(identityType = IdentityType.APPLICATION)
    public class PollInfo {

        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        @Persistent(mappedBy = "currentState")
        private List<AccessInfo> accesses;
        ...
    }
    ```

    I created an instance of the PollInfo class and made it persistent; that works. But when I then want to load this group by the AccessInfo key, I get a NucleusObjectNotFoundException. Is it possible to load a group by the child's key?
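    A likely cause, sketched below as an assumption rather than a confirmed diagnosis: with owned relationships, the datastore key of a child entity embeds its parent's key as an ancestor path, so a key built from the child kind and name alone (as setKey does above) no longer matches the stored entity. Building the full path, parent then child, does:

    ```java
    import javax.jdo.PersistenceManager;

    import com.google.appengine.api.datastore.Key;
    import com.google.appengine.api.datastore.KeyFactory;

    // Sketch: construct the child's full key including the PollInfo parent and
    // fetch it directly. Method and parameter names are illustrative.
    public class AccessInfoLoader {
        public static AccessInfo load(PersistenceManager pm,
                                      long pollId, String accessName) {
            Key pollKey = KeyFactory.createKey("PollInfo", pollId);   // parent key
            Key accessKey = new KeyFactory.Builder(pollKey)
                    .addChild("AccessInfo", accessName)               // child keyed by name
                    .getKey();
            return pm.getObjectById(AccessInfo.class, accessKey);
        }
    }
    ```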


  • Hudson Maven build fails using workspace POM, works when pointing to development copy

    - by Deejay
    I'm developing a series of web applications using the Eclipse IDE, Maven, SVN, and Hudson for CI. When I specify the "Root POM" option in my Hudson job to be the copy of pom.xml in its workspace directory, the build fails, citing a compilation failure due to missing classpath entries:

    ```
    [ERROR] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Compilation failure
    C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\model\User.java:[24,42] package org.hibernate.validator.constraints does not exist
    C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\dao\UserGroupHibernateSupportDao.java:[8,20] package org.hibernate does not exist
    C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\dao\UserGroupHibernateSupportDao.java:[10,49] package org.springframework.orm.hibernate3.support does not exist
    ```

    When I specify the "Root POM" to be the copy of pom.xml in my Eclipse workspace, it builds just fine. It builds fine from Eclipse too. I want to move Hudson to a separate machine so several developers can use it, so I can't point to my own development workspace for the POM. If I try putting an SVN URL in the "Root POM" option, it says file not found. What should I enter here for a project worked on by several developers and hosted in an SVN repository?


  • Update database every 30 minutes once

    - by Ravi
    Hi, I am working on a C#.Net Windows application with SQL Server 2005, using an ADO.Net data service for database maintenance. I work in the industrial automation domain, where data is read continuously for more than 8 hours at a time. Data read from the device is stored to the database periodically, based on a trigger. For example: reading starts at 9:00 AM and the trigger fires at 9:50 AM. Once the trigger fires, the last 30 minutes of data (9:20 AM to 9:50 AM) must be stored to the database, and from then on, data keeps being read from the device and stored to the database. At 10:00 AM the trigger switches off, and storing to the database has to stop. The trigger fires again at 11:00 AM; once it fires, the last 30 minutes of data (10:30 AM to 11:00 AM) must be stored to the database, and storing continues while the trigger is on. While the trigger is off, data is only kept locally. My problem: I don't know where or how to keep the readings temporarily until the trigger fires, or how to fetch the last 30 minutes of data and store it to the database when it does. It would be great if anyone could suggest an idea. Thanks
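    The question concerns C#, but the buffering logic is language-neutral. A minimal sketch of one approach, with illustrative names (the logic translates directly to C#): keep a rolling 30-minute window in memory, evict old readings as new ones arrive, and flush the window when the trigger fires.

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of a rolling 30-minute buffer. Readings accumulate in memory while
    // the trigger is off; when it fires, the buffered window is flushed to the
    // database, after which readings are written straight through until the
    // trigger switches off again.
    class RollingBuffer {
        static class Reading {
            final long timestampMillis;
            final double value;
            Reading(long t, double v) { timestampMillis = t; value = v; }
        }

        private static final long WINDOW_MS = 30 * 60 * 1000L;
        private final Deque<Reading> buffer = new ArrayDeque<>();
        private boolean triggerOn = false;

        void onReading(Reading r) {
            if (triggerOn) {
                writeToDatabase(r);        // trigger active: persist directly
            } else {
                buffer.addLast(r);         // trigger inactive: buffer locally
                // evict anything older than 30 minutes so memory stays bounded
                while (!buffer.isEmpty()
                        && r.timestampMillis - buffer.peekFirst().timestampMillis > WINDOW_MS) {
                    buffer.removeFirst();
                }
            }
        }

        void onTriggerFired() {
            triggerOn = true;
            for (Reading r : buffer) writeToDatabase(r);  // flush the last 30 minutes
            buffer.clear();
        }

        void onTriggerCleared() { triggerOn = false; }

        private void writeToDatabase(Reading r) { /* hypothetical DB call */ }
    }
    ```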


  • JAXB, how to marshal without a namespace

    - by Alvin
    I have a fairly large, repetitive XML document to create using JAXB. Storing the whole object graph in memory and then marshalling it takes too much memory. Essentially, my XML looks like this:

    ```xml
    <Store>
      <item />
      <item />
      <item />
      .....
    </Store>
    ```

    Currently my solution to the problem is to "hard code" the root tag to an output stream and marshal each of the repetitive elements one by one (pseudocode):

    ```
    aOutputStream.write("<?xml version='1.0'?>")
    aOutputStream.write("<Store>")
    foreach items as item
        aMarshaller.marshall(item, aOutputStream)
    end
    aOutputStream.write("</Store>")
    aOutputStream.close()
    ```

    Somehow JAXB generates the XML like this:

    ```xml
    <Store xmlns="http://stackoverflow.com">
      <item xmlns="http://stackoverflow.com"/>
      <item xmlns="http://stackoverflow.com"/>
      <item xmlns="http://stackoverflow.com"/>
      .....
    </Store>
    ```

    Although this is valid XML, it just looks ugly, so I'm wondering: is there a way to tell the marshaller not to put the namespace on the item elements? Or is there a better way to use JAXB to serialize XML chunk by chunk?
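    One approach worth trying, sketched below under assumptions (an Item class for the repeated element; whether the per-item namespace declaration is suppressed depends on the JAXB implementation honoring the writer's namespace context): write the root element through an XMLStreamWriter and marshal each item as a fragment.

    ```java
    import java.io.OutputStream;
    import java.util.List;

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBElement;
    import javax.xml.bind.Marshaller;
    import javax.xml.namespace.QName;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    // Sketch: JAXB_FRAGMENT suppresses the per-item XML declaration, and the
    // root element (with its default namespace) is written once by hand via
    // the stream writer, giving JAXB the chance to reuse the in-scope binding.
    public class StoreStreamer {
        private static final String NS = "http://stackoverflow.com";

        public static void write(List<Item> items, OutputStream out) throws Exception {
            JAXBContext ctx = JAXBContext.newInstance(Item.class);
            Marshaller m = ctx.createMarshaller();
            m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE); // no <?xml ...?> per item

            XMLStreamWriter xsw = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(out, "UTF-8");
            xsw.writeStartDocument("UTF-8", "1.0");
            xsw.setDefaultNamespace(NS);
            xsw.writeStartElement(NS, "Store");
            xsw.writeDefaultNamespace(NS);

            QName itemName = new QName(NS, "item");
            for (Item item : items) {
                // wrap in a JAXBElement so Item does not need @XmlRootElement
                m.marshal(new JAXBElement<Item>(itemName, Item.class, item), xsw);
            }

            xsw.writeEndElement();   // </Store>
            xsw.writeEndDocument();
            xsw.close();
        }
    }
    ```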

