Search Results

Search found 12415 results on 497 pages for 'persistent store'.


  • Server side form validation and POST data

    - by tomcritchlow
    Hi, I have a user input form here: http://www.7bks.com/create (Google login required). When you first create a list you are asked to create a public username. Unfortunately there is currently no constraint to make this unique. I'm working on the code to enforce unique usernames at the moment and would like to know the best way to do it. Tech details: App Engine, Python, webapp framework. What I'm planning is something like this: first the /create form POSTs the data to /inputlist/ (this is the same as what currently happens); /inputlist/ queries the datastore for the given username; if it already exists, redirect back to /create and display the /create page with all the info previously entered, plus an additional error message of "this username is already taken". My question is: is this the best way of handling server-side validation? And what's the best way of storing the list details while I verify and modify the username? As I see it I have 3 options to store the list details, but I'm not sure which is "best": (1) store the list details in the session cookie (I am using gaesessions for cookies); (2) define a separate POST handler for /create and post the list data back from /inputlist/ to the /create page (currently /create only handles GET); (3) store the list in the datastore, even though the username is non-unique. Thank you very much for your help :) I'm pretty new to Python and coding in general, so if I've missed something obvious, my apologies. Tom

    PS - I'm sure I can eventually figure it out, but I can't find any documentation on POSTing data using the App Engine webapp framework, which I'd need in order to do option 2 above - maybe you could point me in the right direction for that too? Thanks!

    PPS - It's a little out of date now, but you can see roughly how the /create and /inputlist/ code works at the moment here: 7bks.com Gist

    Read the article

  • Struts 2: how to display messages saved in an interceptor which would redirect to another action?

    - by mui13
    In my interceptor, if the user doesn't have enough rights, there should be a warning message:

        public String intercept(ActionInvocation invocation) throws Exception {
            ActionContext actionContext = invocation.getInvocationContext();
            Map<String, Object> sessionMap = actionContext.getSession();
            User loginUser = (User) sessionMap.get("user");
            Object action = invocation.getAction();
            if (loginUser != null && loginUser.getRole().getId() != Constant.AUTHORITY_ADMIN) {
                ((ValidationAware) action).addFieldError("user.authority",
                        ((DefaultAction) action).getText("user.action.authority.not.enough"));
                return DefaultAction.HOME_PAGE;
            }
            return invocation.invoke();
        }

    It then redirects to the "HOME_PAGE" action and, on success, displays its information in the JSP. So how do I display the warning message? I have two interceptor stacks configured in struts.xml. For the admin-rights requirement:

        <interceptor-stack name="authorityStack">
            <interceptor-ref name="authority" />
            <interceptor-ref name="defaultStack" />
            <interceptor-ref name="store">
                <param name="operationMode">STORE</param>
            </interceptor-ref>
        </interceptor-stack>

    The default is:

        <interceptor-stack name="default">
            <interceptor-ref name="login" />
            <interceptor-ref name="defaultStack" />
            <interceptor-ref name="store">
                <param name="operationMode">AUTOMATIC</param>
            </interceptor-ref>
        </interceptor-stack>

    Read the article

  • Licensing iPhone apps per user in existing system

    - by Alxandr
    I've been asked by my job to write an iPhone app for an existing system for managing work tasks. This system is proprietary and costs money, so in order to log in you need to be a customer. Now, I've got two questions about the legality of licensing iPhone apps with this system:

    1. My company would like to be able to sell the app for profit - not as a one-time payment, but as an added subscription fee on top of the already existing one. Is it legal for us (according to the terms of distributing an iPhone app on the Apple App Store) to do this? That way we'd just add another field to the user database saying whether or not iPhone access is enabled for them, and distribute the app as a free app on the App Store.

    2. If the previous option is not allowed, we'd like to just create a free app and distribute it as part of the existing system. In other words, no extra fee for the users to use the iPhone app, but still free distribution through the App Store.

    Due to our company not being American, or having an office in the U.S. at all, an enterprise account is not an option. Please let me know if there is anything wrong with either of the above approaches.

    Read the article

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem: I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what the best way to do it is (combination of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application that takes advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm considering is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
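
    One standard middle ground between slicing the array into files and putting everything in RAM is a memory-mapped file: the OS pages 8-byte slices in and out on demand, and access looks like ordinary array indexing. Below is a rough sketch of the idea - Java is used purely for illustration (numpy.memmap in Python and mmap(2) in C are the same mechanism), a single Java mapping is capped at 2 GB so the file is mapped in one window per value of the first index, and all class and variable names here are made up for the example:

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        // Hypothetical wrapper: an NxNxNxN array of doubles backed by one big file,
        // mapped in N windows (one per value of the first index) so each mapping
        // stays under Java's 2 GB-per-MappedByteBuffer limit.
        public class MappedArray4D implements AutoCloseable {
            private final int n;
            private final RandomAccessFile file;
            private final MappedByteBuffer[] windows;

            public MappedArray4D(String path, int n) throws Exception {
                this.n = n;
                long bytesPerWindow = (long) n * n * n * 8;  // one i-slice of doubles
                this.file = new RandomAccessFile(path, "rw");
                this.windows = new MappedByteBuffer[n];
                FileChannel ch = file.getChannel();
                for (int i = 0; i < n; i++) {
                    windows[i] = ch.map(FileChannel.MapMode.READ_WRITE,
                                        i * bytesPerWindow, bytesPerWindow);
                }
            }

            // Byte offset inside one window; int is fine while a window is < 2 GB.
            private int offset(int j, int k, int l) {
                return ((j * n + k) * n + l) * 8;
            }

            public double get(int i, int j, int k, int l) {
                return windows[i].getDouble(offset(j, k, l));
            }

            public void set(int i, int j, int k, int l, double v) {
                windows[i].putDouble(offset(j, k, l), v);
            }

            @Override
            public void close() throws Exception {
                file.close();
            }
        }

    Whether this beats buying RAM depends entirely on the access pattern: sequential sweeps map well, while random access across the whole 32 GB will still thrash the page cache.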

    Read the article

  • Blackberry application works in simulator but not device

    - by Kai
    I read some of the similar posts on this site that deal with what seems to be the same issue, and the responses didn't really clarify things for me. My application works fine in the simulator. I believe I'm on a Bold 9000 with OS 4.6. The app is signed. My app makes an HTTP call via 3G to fetch an XML result; the content type is application/xhtml+xml. On the device it gives no error and shows no visual sign of error. I tell the try/catch to print the results to the screen and I get nothing. The HttpConnection code was taken right out of the demos and works fine in the sim. Since it gives no error, I began to reflect back on things I recall reading back when the project began. deviceside=true? Something like that? My request is simply

        HttpConnection connection = (HttpConnection) Connector.open(url);

    where url is just a standard URL, no GET vars. Based on the amount of time I see the connection arrows in the corner of the screen, I assume the app is launching the initial communication to my server, then either getting a bad result, or getting results while the persistent store is not functioning as expected. I have no idea where to begin with this. Posting code would be ridiculous since it would be basically my whole app. I guess my question is: does anyone know of any major differences between device and simulator that could cause something like the HTTP connection or the persistent store to fail? A build setting? An OS restriction? Any standard procedure I may have just not known about that everyone should do before beginning device testing? Thanks
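
    On that deviceside hunch: on BlackBerry OS 4.x the transport is typically selected by a suffix appended to the URL, and a real handset - unlike the simulator, which rides the host PC's connection - often needs one. Whether ;deviceside=true (Direct TCP, plus carrier APN settings) is the right choice depends on the carrier/BES setup, so treat this as a sketch; the other useful habit it shows is checking getResponseCode() so failures stop being silent:

        import java.io.InputStream;
        import javax.microedition.io.Connector;
        import javax.microedition.io.HttpConnection;

        public class FetchSketch {
            // Sketch: fetch a URL over Direct TCP on-device. ";deviceside=true"
            // bypasses the BES/MDS transport; the base URL is a placeholder.
            public static byte[] fetch(String baseUrl) throws Exception {
                HttpConnection conn =
                        (HttpConnection) Connector.open(baseUrl + ";deviceside=true");
                try {
                    int status = conn.getResponseCode();  // surface HTTP errors explicitly
                    if (status != HttpConnection.HTTP_OK) {
                        throw new Exception("HTTP status " + status);
                    }
                    InputStream in = conn.openInputStream();
                    // Naive single read; real code should loop, and getLength()
                    // can be -1 when the server sends no Content-Length.
                    byte[] buf = new byte[(int) conn.getLength()];
                    in.read(buf);
                    in.close();
                    return buf;
                } finally {
                    conn.close();
                }
            }
        }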

    Read the article

  • Building a TriggerField with PopUp Window

    - by umit_alba
    Hi! I built a TriggerField, and when I click it I want a popup that is attached to the button in the TriggerField (so when I click anywhere else it should disappear, and it should pop up next to the button when I click the button, just like a date-picker popup). I somehow managed to do something like that with an Ext.Window, but the offset and position don't match. This should all be contained in a RowEditor. My code:

        new Ext.grid.GridPanel({
            store: Store,
            region: 'center',
            height: 150,
            //minWidth: 700,
            autoScroll: true,
            listeners: {},
            plugins: [new Ext.ux.grid.RowEditor()],
            tbar: [{
                iconCls: 'icon-user-add',
                text: ' hinzufügen',
                handler: function(){
                    alert("abc");
                }
            }, {
                ref: '../removeBtn',
                iconCls: 'icon-user-delete',
                text: 'löschen',
                disabled: true,
                handler: function(){
                    editor.stopEditing();
                    var s = grid.getSelectionModel().getSelections();
                    for (var i = 0, r; r = s[i]; i++) {
                        store.remove(r);
                    }
                }
            }],
            columns: [{
                header: 'Monate',
                dataIndex: 'MONAT',
                width: 50,
                sortable: true,
                editor: new Ext.form.TriggerField({
                    id: "EditorMonate",
                    items: [],
                    onTriggerClick: function(thiss){
                        if (!Ext.getCmp("autoWMonate")) {
                            var monate = new Ext.Window({
                                x: Ext.getCmp("EditorMonate").x,
                                closeAction: "hide",
                                width: 275,
                                id: "autoWMonate",
                                layout: "table",
                                layoutConfig: { columns: 10 }
                            });
                            var text;
                            for (var mon = 1; mon < 13; mon++) {
                                text = (mon < 10) ? "0" + mon : "" + mon;
                                monate.items.add(new Ext.Button({
                                    cls: "x-btn",
                                    value: parseInt(text),
                                    selected: true,
                                    text: text,
                                    id: text
                                }));
                            }
                        }
                        Ext.getCmp("autoWMonate").hidden ? Ext.getCmp("autoWMonate").show()
                                                         : Ext.getCmp("autoWMonate").hide();
                    }
                })
            }]
        })

    Read the article

  • C++ ofstream cannot write to file....

    - by user69514
    Hey, I am trying to write some numbers to a file, but when I open the file it is empty. Can you help me out here? Thanks.

        /** main function **/
        int main(){
            /** variables **/
            RandGen* random_generator = new RandGen;
            int random_numbers;
            string file_name;

            /** ask user for quantity of random numbers to produce **/
            cout << "How many random numbers would you like to create?" << endl;
            cin >> random_numbers;

            /** ask user for the name of the file to store the numbers **/
            cout << "Enter name of file to store random numbers" << endl;
            cin >> file_name;

            /** now create array to store the numbers **/
            int random_array [random_numbers];

            /** fill the array with random integers **/
            for(int i=0; i<random_numbers; i++){
                random_array[i] = random_generator -> randInt(-20, 20);
                cout << random_array[i] << endl;
            }

            /** open file and write contents of random array **/
            const char* file = file_name.c_str();
            ofstream File(file);

            /** write contents to the file **/
            for(int i=0; i<random_numbers; i++){
                File << random_array[i] << endl;
            }

            /** close the file **/
            File.close();
            return 0;
        }   /** END OF PROGRAM **/

    Read the article

  • table design for storing large number of rows

    - by hyperboreean
    I am trying to store in a PostgreSQL database some unique identifiers, along with the sites they have been seen on. I can't really decide which of the following three options to choose in order to be fast and easily maintainable. The table has to provide the following information: the unique identifier (which, unfortunately, is text) and the sites on which that unique identifier has been seen. The amount of data it would have to hold is rather large: there are around 22 million unique identifiers that I know of. So I thought about the following table designs:

    1. A single table:

            id - integer
            identifier - text
            seen_on_site - an integer, foreign key to a sites table

       This approach would require around 22 million rows multiplied by the number of sites.

    2. One boolean column per site:

            id - integer
            identifier - text
            seen_on_site_1 - boolean
            seen_on_site_2 - boolean
            ...
            seen_on_site_n - boolean

       Hopefully the number of sites won't go past 10. This would require only as many rows as there are unique identifiers that I know of - around 20 million - but it would make it hard to work with from an ORM perspective.

    3. A normalized many-to-many design: one table that stores only unique identifiers:

            id - integer
            unique_identifier - text

       one table that stores only sites:

            id - integer
            site - text

       and one many-to-many relation:

            id - integer
            unique_id - integer (FK to the table storing identifiers)
            site_id - integer (FK to the sites table)

    Another approach would be to have a separate table of unique identifiers for each site. So, which one seems like the better approach to take in the long run?
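
    For what it's worth, the query the third (normalized) design has to answer - "which sites has identifier X been seen on?" - is a plain two-join lookup. A sketch in JDBC follows; the table and column names (identifiers, sites, identifier_sites) are assumptions filled in from the outline above, and the connection details are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class SeenOnSitesQuery {
            // Sketch of the option-3 lookup: sites on which one identifier was seen.
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:postgresql://localhost/mydb";  // placeholder
                try (Connection conn = DriverManager.getConnection(url, "user", "pass");
                     PreparedStatement ps = conn.prepareStatement(
                             "SELECT s.site FROM sites s " +
                             "JOIN identifier_sites m ON m.site_id = s.id " +
                             "JOIN identifiers i ON i.id = m.unique_id " +
                             "WHERE i.unique_identifier = ?")) {
                    ps.setString(1, args[0]);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }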

    Read the article

  • Google App Engine: how to get an object from a servlet?

    - by Frank
    I have the following class of objects in Google App Engine's datastore; I can see them from the "Datastore Viewer":

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.IdentityType;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;

        @PersistenceCapable(identityType=IdentityType.APPLICATION)
        public class Contact_Info_Entry implements Serializable {
            @PrimaryKey
            @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY)
            Long Id;

            public static final long serialVersionUID = 26362862L;

            String Contact_Id="", First_Name="", Last_Name="", Company_Name="", Branch_Name="",
                   Address_1="", Address_2="", City="", State="", Zip="", Country="";
            double D_1, D_2;
            boolean B_1, B_2;
            Vector<String> A_Vector = new Vector<String>();

            public Contact_Info_Entry() { }
            ......
        }

    How can my Java applications get the object from a servlet URL? For instance, if I have an instance of Contact_Info_Entry whose Contact_Id is "ABC-123", and my App Id is nm-java, when my Java program accesses the URL http://nm-java.appspot.com/Check_Contact_Info?Contact_Id=ABC-123 how will the Check_Contact_Info servlet get the object from the datastore and return it to my app?

        public class Check_Contact_Info_Servlet extends HttpServlet {
            static boolean Debug = true;

            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
            }
            ...
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                doGet(request, response);
            }
        }

    Frank
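
    One way to wire that servlet up (a sketch, not the only approach): query JDO by the Contact_Id field, detach the result, and stream it back with Java serialization, which works here because Contact_Info_Entry implements Serializable; the calling app then reads it with an ObjectInputStream and must have the same class on its classpath. PMF is assumed to be the PersistenceManagerFactory holder class from the GAE docs:

        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class Check_Contact_Info_Servlet extends HttpServlet {
            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                String contactId = request.getParameter("Contact_Id");
                PersistenceManager pm = PMF.get().getPersistenceManager();
                try {
                    // Filter on the Contact_Id property, not the Long primary key.
                    Query q = pm.newQuery(Contact_Info_Entry.class, "Contact_Id == idParam");
                    q.declareParameters("String idParam");
                    @SuppressWarnings("unchecked")
                    List<Contact_Info_Entry> results =
                            (List<Contact_Info_Entry>) q.execute(contactId);
                    if (results.isEmpty()) {
                        response.sendError(HttpServletResponse.SC_NOT_FOUND);
                        return;
                    }
                    response.setContentType("application/octet-stream");
                    ObjectOutputStream out =
                            new ObjectOutputStream(response.getOutputStream());
                    out.writeObject(pm.detachCopy(results.get(0)));  // detach before serializing
                    out.flush();
                } finally {
                    pm.close();
                }
            }
        }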

    Read the article

  • Common Lisp condition system for transfer of control

    - by Ken
    I'll admit right up front that the following is a pretty terrible description of what I want to do. Apologies in advance. Please ask questions to help me explain. :-)

    I've written ETLs in other languages that consist of individual operations that look something like:

        // in class CountOperation
        IEnumerable<Row> Execute(IEnumerable<Row> rows) {
            var count = 0;
            foreach (var row in rows) {
                row["record number"] = count++;
                yield return row;
            }
        }

    Then you string a number of these operations together and call the Dispatcher, which is responsible for calling operations and pushing data between them.

    I'm trying to do something similar in Common Lisp, and I want to use the same basic structure, i.e., each operation is defined like a normal function that inputs a list and outputs a list, but lazily. I can define-condition a condition (have-value) to use for yield-like behavior, and I can run it in a single loop, and it works great. I'm defining the operations the same way, looping through the inputs:

        (defun count-records (rows)
          (loop for count from 0
                for row in rows
                do (signal 'have-value :value `(:count ,count ,@row))))

    The trouble comes when I want to string together several operations and run them. My first attempt at writing a dispatcher for these looks something like:

        (let ((next-op ...)) ;; pick an op from the set of all ops
          (loop
            (handler-bind ((have-value (...))) ;; records output from operation
              (setq next-op ...) ;; pick a new next-op
              (call next-op))))

    But restarts have only dynamic extent: each operation will have the same restart names. A restart isn't a Lisp object I can store to save the state of a function: it's something you call by name (symbol) inside the handler block, not a continuation you can store for later use.

    Is it possible to do something like what I want here? Or am I better off just making each operation function explicitly look at its input queue and explicitly place values on the output queue?

    Read the article

  • How to use associated Model as datasource for DataView

    - by Chris Gilbert
    I have a Model structure as shown below, and I want to know how to use the Bookings array as the data source of my DataView.

    Model structure:

        Client
            ClientId
            Name
            Bookings (HasManyAssociation)
            Contacts (HasManyAssociation)
            AjaxProxy
            JsonReader (ImplicitIncludes is set to true, so child models are created with one call)
        Booking
            BookingNodeId
            BookingDetails
        Contact
            ContactNodeId
            ContactDetails

    The above gives me a data structure as follows:

        Client
            Bookings[ Booking, Booking ]
            Contacts[ Contact, Contact ]

    What I want to be able to do is either create a Store from my Bookings array and then use that Store as the data source for my DataView, OR directly use the Bookings array as the data source (I don't really care which, to be honest). If I set up the AjaxProxy on my Booking model it works fine, but then obviously I cannot automatically create my Client and Contacts when I load my JSON. It seems to me that the Client model, being the top-level model hierarchically, is the one that should load the data.

    EDIT: I figured it out as follows (with special thanks to handet87 below for the dataview.setStore() pointer). The key in this case is to know that creating the relationship actually sets up another store called, in this case, BookingsStore or ContactsStore. All I needed to do was dataview.setStore("BookingsStore").

    Read the article

  • In this example, would Customer or AccountInfo properly be the entity group parent?

    - by Badhu Seral
    In this example, the Google App Engine documentation makes the Customer the entity group parent of the AccountInfo entity. Wouldn't AccountInfo encapsulate Customer rather than the other way around? Normally I would think of an AccountInfo class as including all of the information about the Customer.

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;
        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;

        @PersistenceCapable
        public class AccountInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            public void setKey(Key key) {
                this.key = key;
            }
        }

        // ...

        KeyFactory.Builder keyBuilder =
            new KeyFactory.Builder(Customer.class.getSimpleName(), "custid985135");
        keyBuilder.addChild(AccountInfo.class.getSimpleName(), "acctidX142516");
        Key key = keyBuilder.getKey();

        AccountInfo acct = new AccountInfo();
        acct.setKey(key);
        pm.makePersistent(acct);

    Read the article

  • Migrating from a single entity to an abstract parent entity with child entities, NSEntityMigrationPolicy not called.

    - by Jimmy Selgen Nielsen
    Hi. I'm trying to upgrade my current application to use an abstract parent entity with specialized sub-entities. I've created a custom NSEntityMigrationPolicy, and in the mapping model I've set the Custom Policy to the name of my class. I'm initializing my persistent store like this, which should be fairly standard:

        NSError *error = nil;
        persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
            initWithManagedObjectModel:[self managedObjectModel]];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, nil];
        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                      configuration:nil
                                                                URL:storeUrl
                                                            options:options
                                                              error:&error]) {
            NSLog(@"Error adding persistent store : %@", [error description]);
            NSAssert(error == nil, [error localizedDescription]);
        }

    When I run the app I get the following error:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'The operation couldn't be completed. (Cocoa error 134140.)'

    and [error userInfo] contains "reason=Can't find mapping model for migration".

    I've verified that version 1 of the data model will open, and if I set NSInferMappingModelAutomaticallyOption I get a migration, although my entities are not migrated correctly (as expected). I've verified that the mapping model (cdm) is in the application bundle, but somehow it refuses to find it. I've also set breakpoints and NSLog() statements in the custom migration policy, and none of it runs, with or without NSInferMappingModelAutomaticallyOption. Any hints as to why it seems unable to find the mapping model?

    Read the article

  • Is it a good idea to use MySQL and Neo4j together?

    - by Sanoj
    I will make an application with a lot of similar items (millions), and I would like to store them in a MySQL database, because I would like to do a lot of statistics and searches on specific values of specific columns. At the same time, I will store relations between all the items; they are related in many connected binary-tree-like structures (transitive closure), and relational databases are not good at that kind of structure, so I would like to store all the relations in Neo4j, which has good performance for this kind of data.

    My plan is to have all data except the relations in the MySQL database, and all relations, keyed by item_id, stored in the Neo4j database. When I want to look up a tree, I first search Neo4j for all the item_ids in the tree, then search the MySQL database for the specified items with a query that would look like:

        SELECT * FROM items
        WHERE item_id = 45 OR item_id = 345435 OR item_id = 343
           OR item_id = 78 OR item_id = 4522 OR item_id = 676
           OR item_id = 443 OR item_id = 4255 OR item_id = 4345

    Is this a good idea, or am I very wrong? I haven't used graph databases before. Are there any better approaches to my problem? How would the MySQL query perform in this case?
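
    A side note on that final query: a long chain of ORs is normally written as an IN list, which MySQL handles well for primary-key lookups; for very large id sets, a temporary join table scales better. A hedged JDBC sketch of the IN-list version (class and method names invented for the example):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.List;

        public class TreeLookup {
            // Fetch the item rows for the ids returned by the graph traversal.
            public static void printItems(Connection conn, List<Long> ids)
                    throws SQLException {
                if (ids.isEmpty()) return;  // "IN ()" is not valid SQL
                StringBuilder sql =
                        new StringBuilder("SELECT * FROM items WHERE item_id IN (");
                for (int i = 0; i < ids.size(); i++) {
                    sql.append(i == 0 ? "?" : ", ?");
                }
                sql.append(")");
                try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                    for (int i = 0; i < ids.size(); i++) {
                        ps.setLong(i + 1, ids.get(i));
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getLong("item_id"));
                        }
                    }
                }
            }
        }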

    Read the article

  • Ruby and RSS2 Feed not displaying image

    - by pcasa
    I'm trying to create a simple RSS 2.0 feed that I could later pass on to FeedBurner, but I can't get the feed to display images at all. Also, from what I have read, having xml.instruct! on top might cause IE to complain that it's not a valid feed. Is this true? My code looks like:

        xml.instruct!
        xml.rss "version" => "2.0", "xmlns:dc" => "http://purl.org/dc/elements/1.1/" do
          xml.channel do
            xml.title "Store"
            xml.link url_for :only_path => false, :controller => 'products'
            xml.description "Store"
            xml.pubDate @products.first.updated_at.rfc822 if @products.any?
            @products.each do |product|
              xml.item do
                xml.title product.name
                xml.pubDate(product.updated_at.rfc822)
                xml.image do
                  xml.url domain_host + product.product_image.url(:small)
                  xml.title "Store"
                  xml.link url_for :only_path => false, :controller => 'products'
                end
                xml.link url_for :only_path => false, :controller => 'products', :action => 'show', :id => product.permalink
                xml.description product.fine_print
                xml.guid url_for :only_path => false, :controller => 'products', :action => 'show', :id => product.permalink
              end
            end
          end
        end

    Read the article

  • Big trouble after app update. CoreData migration error

    - by MrBr
    This morning we had big trouble with our iPhone app; we even had to take it off the store. The thing is, we made really small changes to our xcdatamodel. We assumed the update process would automatically exchange it the right way, until we found out that something like Core Data migration exists. We are using UIManagedDocument to connect to the persistent store. How is it possible to exchange this file with the new one? While we were developing, we just uninstalled the whole app from the device and then installed it again, and everything worked. How can we simulate this process for updates coming through the App Store?

    UPDATE: I try to set the migration option like this:

        _database = [[UIManagedDocument alloc] init];
        NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
        [options setObject:[NSNumber numberWithBool:YES]
                    forKey:NSMigratePersistentStoresAutomaticallyOption];
        _database.persistentStoreOptions = options;

    but the app is still crashing with:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'This NSPersistentStoreCoordinator has no persistent stores.
        It cannot perform a save operation.'

    Read the article

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question:

    I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with the Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code that stores the DB on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow?

    My questions are: 1. is my understanding of AWS correct? and 2. is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. I would love to have the scalability of AWS, though.

    Read the article

  • Storing header and data sections in a CSV file

    - by morpheous
    This should be relatively easy to do, but after several hours of straight programming my mind seems a bit frazzled and could do with some help.

    I have a C++ class which I am currently using to read/write data to file. I was initially using binary data, but have decided to store the data as CSV in order to let programs written in other languages load the data. The C++ class looks a bit like this:

        class BinaryData {
        public:
            BinaryData();
            void serialize(std::ostream& output) const;
            void deserialize(std::istream& input);

        private:
            Header m_hdr;
            std::vector<Row> m_rows;
        };

    I am simply rewriting the serialize/deserialize methods to write to a CSV file. I am not sure of the "best" way to store a header section and a "data" section in a "flat" CSV file, though - any suggestions on the most sensible way to do this?
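
    One conventional layout - an assumption here, not something the poster has settled on - is to prefix the header section's lines with a sentinel character such as '#', then emit plain CSV rows; a reader in any language can parse or skip the prefixed lines before handing the rest to its CSV library. A sketch of the writer side (Java for illustration; field names are invented):

        import java.io.PrintWriter;
        import java.io.Writer;

        public class CsvWithHeaderWriter {
            // '#'-prefixed key=value lines form the header section,
            // followed by ordinary CSV data rows.
            public static void serialize(Writer out, String version, String[][] rows) {
                PrintWriter w = new PrintWriter(out);
                w.println("# version=" + version);
                w.println("# rows=" + rows.length);
                for (String[] row : rows) {
                    w.println(String.join(",", row));  // real code must quote/escape
                }
                w.flush();
            }
        }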

    Read the article

  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural-language sentence-structure trees? Using OpenNLP's English Treebank parser, I can get fairly reliable sentence-structure parses for arbitrary sentences. What I'd like to do is create a tool that can extract all the docstrings from my source code, generate these trees for all sentences in the docstrings, store the trees and their associated function names in a database, and then allow a user to search the database using natural-language queries.

    So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?)))

    how would I store and query these trees in a SQL database?

    I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement it in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but obviously describes completely different behavior.

    Read the article

  • Negative number representation across multiple architectures

    - by Donotalo
    I'm working with the OKI 431 microcontroller. It can communicate with a PC that has the appropriate software installed. An EEPROM connected to the I2C bus of the micro works as permanent memory. The PC software can read from and write to this EEPROM.

    Consider two numbers, B and C, each a two-byte integer. B is known to both the PC software and the micro, and is a constant. C will be a number so close to B that B-C will fit in a signed 8-bit integer. After some testing, an appropriate value for C will be determined by the PC and stored into the EEPROM of the micro for later use. Now the micro can store C in two ways:

    1. The micro can store the whole two bytes representing C.
    2. The micro can store B-C as a one-byte signed integer, and can later derive C from B and B-C.

    I think two's-complement representation of negative numbers is now universally accepted by hardware manufacturers. Still, I personally don't like negative numbers being stored in a storage medium that will be accessed by two different architectures, because negative numbers can be represented in different ways. For your information, the 431 also uses two's complement. Should I get rid of the headache that negative numbers can be represented in different ways and accept the one-byte solution, as my other team member suggested? Or should I stick with the two-byte solution because then I don't need to deal with negative numbers? Which one would you prefer, and why?
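
    As a concrete reference for the one-byte option: the delta round-trips exactly because B-C is guaranteed to fit a signed 8-bit range, and on any two's-complement reader the same byte decodes to the same value. A small sketch (Java for illustration - the (byte) cast is a two's-complement truncation and the widening back to int sign-extends; the numbers are made up):

        public class DeltaByte {
            public static void main(String[] args) {
                int b = 30000;                  // the constant known to both sides
                int c = 29950;                  // measured value near B
                byte stored = (byte) (b - c);   // the single EEPROM byte: 50
                int recovered = b - stored;     // sign extension makes this exact
                System.out.println(stored + " -> C = " + recovered);  // 50 -> C = 29950
            }
        }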

    Read the article

  • ASP.NET server data persistence

    - by Wayne Werner
    Hi, I'm not really sure exactly how the question should be phrased, so please be patient if I ask the wrong thing. I'm writing an ASP.NET application using VB as the code-behind language. I have a data access class that connects to the DB to run queries (parameterized, of course), and another class to perform the validation tasks - I access this class from my .aspx page.

    What I would like is to be able to store the data server-side and wait for the user to choose from a few options based on the validity of the data. But unless my understanding is completely off, having persistent data objects on the server will give problems when multiple users connect?

    My ultimate goal is that once the data has been validated, the end user can't modify it. Currently I'm validating the data, but I still have to retrieve it from the web form AFTER the user says OK, which obviously leaves open the possibility of injecting bad data either accidentally (unlikely) or on purpose (also unlikely for the user, but I'd prefer not to take the chance).

    So am I completely off in my understanding? If so, can someone point me to a resource that provides some instructions on keeping persistent data on the server, or provide instructions? Thanks!

    Read the article

  • How do you work around memcached's key/value limitations?

    - by mjy
    Memcached has length limitations for keys (250?) and values (roughly 1 MB), as well as some (to my knowledge) not very well defined character restrictions for keys. What is the best way to work around those, in your opinion? I use the Perl API Cache::Memcached.

    What I do currently is store a special string as the main key's value if the original value was too big ("parts:<number>"), and in that case I store <number> parts with keys named 1+<main key>, 2+<main key>, etc. This seems "OK" (but messy) for some cases, not so good for others, and it has the intrinsic problem that some of the parts might be missing at any time (so space is wasted keeping the others, and time is wasted reading them).

    As for the key limitations, one could probably implement hashing and store the full key (to work around collisions) in the value, but I haven't needed to do this yet. Has anyone come up with a more elegant way, or even a Perl API that handles arbitrary data sizes (and key values) transparently? Has anyone hacked the memcached server to support arbitrary keys/values?
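
    For comparison, the usual shape of a chunking wrapper is sketched below - written against the (assumed) spymemcached Java client rather than Cache::Memcached, with the same "parts:" marker idea the poster describes; the chunk size and key scheme are illustrative, and the missing-part problem remains (the reader must treat any absent chunk as a full miss):

        import java.util.Arrays;
        import net.spy.memcached.MemcachedClient;

        public class ChunkedCache {
            private static final int CHUNK = 900 * 1024;  // stay under the ~1 MB cap
            private final MemcachedClient mc;

            public ChunkedCache(MemcachedClient mc) {
                this.mc = mc;
            }

            // Store: small values go in directly; large ones become a marker
            // value plus numbered part keys.
            public void set(String key, int ttl, byte[] value) {
                if (value.length <= CHUNK) {
                    mc.set(key, ttl, value);
                    return;
                }
                int parts = (value.length + CHUNK - 1) / CHUNK;
                mc.set(key, ttl, "parts:" + parts);
                for (int i = 0; i < parts; i++) {
                    int from = i * CHUNK;
                    int to = Math.min(from + CHUNK, value.length);
                    mc.set(i + "+" + key, ttl, Arrays.copyOfRange(value, from, to));
                }
            }

            // Fetch: reassemble if the stored value is a parts marker.
            public byte[] get(String key) {
                Object v = mc.get(key);
                if (v instanceof String && ((String) v).startsWith("parts:")) {
                    int parts = Integer.parseInt(((String) v).substring(6));
                    byte[][] chunks = new byte[parts][];
                    int total = 0;
                    for (int i = 0; i < parts; i++) {
                        chunks[i] = (byte[]) mc.get(i + "+" + key);
                        if (chunks[i] == null) return null;  // missing part = miss
                        total += chunks[i].length;
                    }
                    byte[] out = new byte[total];
                    int pos = 0;
                    for (byte[] c : chunks) {
                        System.arraycopy(c, 0, out, pos, c.length);
                        pos += c.length;
                    }
                    return out;
                }
                return (byte[]) v;
            }
        }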

    Read the article

  • Google App Engine datastore encoding?

    - by sernaferna
    I'm using the GAE datastore for a Java application, and storing some text that will be in numerous languages. In my servlet, I'm first checking to see if there's any data in the datastore and, if not, creating some, similar to the following:

        ArrayList<Lang> list = new ArrayList<Lang>();
        list.add(new Lang("EN", "English", 1));
        list.add(new Lang("ES", "Español", 0));
        // more languages here...

        PersistenceManager pm = PMF.get().getPersistenceManager();
        for (Lang l : list) {
            pm.makePersistent(l);
        }

    Since this is using JDO, I guess I should include the relevant parts of the Lang class too:

        @PersistenceCapable
        public class Lang {
            @PrimaryKey
            private String code;

            @Persistent
            private String name;

            @Persistent
            private int popularity;

            // getters & setters & constructors...
        }

    However, the non-ASCII characters are giving me grief. I've set my Eclipse project to use the UTF-8 encoding instead of the default Cp1252, so I think I'm okay from that perspective, but when I use the App Engine Data Viewer to look at my data, that Español entry comes out garbled, and when I click on it to view it, I get a 500 Server Error. (There are some other entries with right-to-left text that don't even show up in the Data Viewer at all, but one problem at a time...)

    Is there anything special I can do in my code to set the character encoding, or to specify to GAE that the data I'm storing is UTF-8? Or is the problem on the Eclipse side - is there something I should be doing with my Java code?
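
    Independent of the datastore itself, two servlet-side spots commonly mangle non-ASCII text and are cheap to rule out; a hedged sketch follows (and, outside the code, make sure javac actually compiles the sources as UTF-8, e.g. with -encoding UTF-8, or a literal like "Español" is corrupted before it ever reaches the datastore):

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class EncodingSafeServlet extends HttpServlet {
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // 1. Decode incoming form data as UTF-8 *before* reading any
                //    parameter; many containers otherwise assume ISO-8859-1.
                req.setCharacterEncoding("UTF-8");
                String name = req.getParameter("name");

                // 2. Declare the outgoing charset so the browser decodes it right.
                resp.setContentType("text/html; charset=UTF-8");
                resp.getWriter().println(name);
            }
        }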

    Read the article

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now. That's my major concern, and that's what I want to implement. So, let's say I take up a machine and start a new (third) reading process: it will be able to connect to the cache, listen to it, and end up with a full copy of the cache, triplicated (as of now it's duplicated). That's waste from a common-sense standpoint too. The size of the cache is 2 GB, and not going distributed is limiting us. That brings me to Coherence.

    But now, we don't have a database as the persistent store either; we have archival processes as our persistent store (90 days' worth of data). OK, now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep).

    During my preliminary/intermediate analysis of Coherence as a solution, a (supposedly) brilliant thought crossed my mind: why not have this as persistent storage behind my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to the DB to replace those flat files. What say you - can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI.)
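
    On the "Coherence as both cache and store" thought: Coherence does support plugging a persistent backend behind a cache via a CacheStore (read-through/write-behind), and nothing forces that backend to be a database - flat files work too. A minimal sketch, assuming the classic com.tangosol.net.cache.CacheStore interface and one serialized file per key; the directory layout and naming are illustrative, and real code would need locking and error handling:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.util.Collection;
        import java.util.HashMap;
        import java.util.Map;
        import com.tangosol.net.cache.CacheStore;

        public class FileCacheStore implements CacheStore {
            private final File baseDir;

            public FileCacheStore(String dir) {
                this.baseDir = new File(dir);
                baseDir.mkdirs();
            }

            private File fileFor(Object key) {
                return new File(baseDir, key.toString());
            }

            public Object load(Object key) {
                File f = fileFor(key);
                if (!f.exists()) return null;
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
                    return in.readObject();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }

            public Map loadAll(Collection keys) {
                Map result = new HashMap();
                for (Object k : keys) {
                    Object v = load(k);
                    if (v != null) result.put(k, v);
                }
                return result;
            }

            public void store(Object key, Object value) {
                try (ObjectOutputStream out =
                             new ObjectOutputStream(new FileOutputStream(fileFor(key)))) {
                    out.writeObject(value);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }

            public void storeAll(Map entries) {
                for (Object o : entries.entrySet()) {
                    Map.Entry e = (Map.Entry) o;
                    store(e.getKey(), e.getValue());
                }
            }

            public void erase(Object key) {
                fileFor(key).delete();
            }

            public void eraseAll(Collection keys) {
                for (Object k : keys) erase(k);
            }
        }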

    Read the article

  • Fastest way to perform subset test operation on a large collection of sets with same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same. It is also finite and discrete. Each set may therefore be stored as a bit field (e.g. 0000100111...) of a relatively short length (e.g. 1024). That is, bit X in the bit field indicates whether item X (of 1024 possible items) is included in the given set or not.

    Now, I want to devise a storage structure and an algorithm to efficiently answer the query: which sets in the data store have set Y as a subset? Set Y itself is not present in the data store and is specified at run time.

    The simplest way to solve this would be to AND the bit field for set Y with the bit field of every set in the data store, one by one, picking the ones whose AND result matches Y's bit field. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bit field? Are there databases that already support such operations on large collections of sets?
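
    As a baseline for whatever index sits on top, the subset test itself is a handful of word-wide ANDs: Y is a subset of X exactly when (Y AND X) == Y, i.e. (Y AND NOT X) == 0. A sketch over 1024-bit fields packed into 16 longs (Java for illustration; one common index on top of this is an inverted list per bit position, intersecting the lists for the bits set in Y):

        public class SubsetTest {
            // A 1024-item set packed into 16 longs; bit i lives in word i/64.
            static final int WORDS = 1024 / 64;

            // True if y is a subset of x: every bit set in y is also set in x.
            static boolean isSubset(long[] y, long[] x) {
                for (int i = 0; i < WORDS; i++) {
                    if ((y[i] & x[i]) != y[i]) return false;
                }
                return true;
            }

            public static void main(String[] args) {
                long[] x = new long[WORDS];
                long[] y = new long[WORDS];
                x[0] = 0b10111;  // items 0, 1, 2, 4
                y[0] = 0b00101;  // items 0, 2
                System.out.println(isSubset(y, x));  // true
            }
        }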

    Read the article
