Search Results

Search found 6317 results on 253 pages for 'persistent storage'.

Page 195/253 | < Previous Page | 191 192 193 194 195 196 197 198 199 200 201 202  | Next Page >

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON web service API. I have classes of resource objects (some are nested), and I pass results from an IntentService that accesses the web service, using the Parcelable interface for all the resource classes. The web service returns arrays of results that can be potentially large (because of the nesting; for example, a post object also contains a comments array, and each comment contains a user object). Currently I'm either inserting the results into a SQLite database or displaying them in a ListView (my relevant methods accept ArrayList<resourceClass> as arguments; some data needs to be stored persistently and some does not). Since I don't know what size of lists I can handle this way without reaching the memory limits, is this good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database from that file or display the data? Is there a better way to handle this? By the way, I'm parsing the JSON as a stream with Gson's Reader, so there shouldn't be memory issues at that stage.
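
    If the file-path approach is attractive, the streaming parse can be written straight back out to a cache file so the full list never exists in memory, and only the path is handed to the ResultReceiver. A minimal sketch, assuming a hypothetical Post resource class and illustrative names:

        import com.google.gson.Gson;
        import com.google.gson.stream.JsonReader;
        import com.google.gson.stream.JsonWriter;
        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        final class JsonCacheWriter {
            // Re-serializes each parsed element to a cache file one at a time, so memory
            // use stays flat regardless of how large the returned array is.
            // Post is a hypothetical resource class from the app's model.
            static File streamToCacheFile(InputStream responseStream, File cacheDir) throws IOException {
                File cache = new File(cacheDir, "posts.json");
                Gson gson = new Gson();
                try (JsonReader reader = new JsonReader(new InputStreamReader(responseStream, "UTF-8"));
                     JsonWriter writer = new JsonWriter(new FileWriter(cache))) {
                    reader.beginArray();
                    writer.beginArray();
                    while (reader.hasNext()) {
                        Post post = gson.fromJson(reader, Post.class);  // parse one element at a time
                        gson.toJson(post, Post.class, writer);          // write it straight back out
                    }
                    reader.endArray();
                    writer.endArray();
                }
                return cache;  // hand this path to the ResultReceiver instead of an ArrayList
            }
        }

    The receiving side can then insert rows into SQLite from that file (or bind the inserted rows through a CursorAdapter) without the whole result set ever living in an ArrayList.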

    Read the article

  • Timer Service in EJB 3.1 - schedule calling timeout problem

    - by Greg
    Hi Guys, I have created a simple example with @Singleton, @Schedule and @Timeout annotations to try if they would solve my problem. The scenario is this: the EJB calls a 'check' function every 5 seconds, and if certain conditions are met it will create a single-action timer that invokes some long-running process in an asynchronous fashion (it's a sort of queue implementation type of thing). It then continues to check, but as long as the long-running process is there it won't start another one. Below is the code I came up with, but this solution does not work, because it looks like the asynchronous call I'm making is in fact blocking my @Schedule method.

        @Singleton
        @Startup
        public class GenerationQueue {

            private Logger logger = Logger.getLogger(GenerationQueue.class.getName());
            private List<String> queue = new ArrayList<String>();
            private boolean available = true;

            @Resource
            TimerService timerService;

            @Schedule(persistent=true, minute="*", second="*/5", hour="*")
            public void checkQueueState() {
                logger.log(Level.INFO, "Queue state check: " + available + " size: " + queue.size() + ", " + new Date());
                if (available) {
                    timerService.createSingleActionTimer(new Date(), new TimerConfig(null, false));
                }
            }

            @Timeout
            private void generateReport(Timer timer) {
                logger.info("!!--timeout invoked here " + new Date());
                available = false;
                try {
                    Thread.sleep(1000 * 60 * 2); // something that lasts for a bit
                } catch (Exception e) {}
                available = true;
                logger.info("New report generation complete");
            }
        }

    What am I missing here, or should I try a different approach? Any ideas most welcome :) Testing with GlassFish 3.0.1, latest build - forgot to mention.
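
    Not necessarily what the poster intended, but one EJB 3.1 option for this shape of problem is to skip the single-action timer entirely and push the long-running work onto a separate bean's @Asynchronous method, which runs on a container-managed thread so the @Schedule callback returns immediately. A sketch, with ReportWorker as a hypothetical name:

        import java.util.concurrent.Future;
        import javax.ejb.AsyncResult;
        import javax.ejb.Asynchronous;
        import javax.ejb.Stateless;

        @Stateless
        public class ReportWorker {

            @Asynchronous
            public Future<Void> generateReport() {
                try {
                    Thread.sleep(1000 * 60 * 2);   // stand-in for the real long-running generation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return new AsyncResult<Void>(null);
            }
        }

    The singleton would inject this with @EJB and call worker.generateReport() from checkQueueState() instead of creating a timer. Separately, it may be worth checking the singleton's default container-managed concurrency: methods, including timer callbacks, take a write lock by default, so a two-minute @Timeout can by itself hold up the next @Schedule invocation.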

    Read the article

  • Should XML be used server-side, and JSON client-side?

    - by Michel Carroll
    As a personal project, I'm making an AJAX chatroom application using XML for server-side storage and JSON for client-side processing. Here's how it works:

    1. An AJAX request gets sent to PHP using GET (chat messages/logins/logouts)
    2. PHP fetches/modifies the XML file on the server
    3. PHP encodes the XML into JSON and sends back a JSON response
    4. JavaScript handles the JSON information (chat messages/logins/logouts)

    I want to eventually make this a larger-scale chatroom application, so I want to make sure it's fast and efficient. Was this a bad design choice? In this case, is switching between XML and JSON OK, or is there a better way?

    Read the article

  • Attachment_fu: can't disable :partition option

    - by Nathan Long
    I'm trying to use the Attachment_Fu plugin in a Rails project, and want to customize the paths where uploaded files are saved. The documentation shows this option:

        :partition  # Whether to partiton files in directories like /0000/0001/image.jpg. Default is true.

    (The 0001 part is an ID from a table.) I don't want that, so I set the partition option to false, like so:

        class Photo < ActiveRecord::Base
          has_attachment :content_type => :image,
                         :storage => :file_system,
                         :max_size => 500.kilobytes,
                         :resize_to => '320x200',
                         :thumbnails => { :thumb => '100x100>' },
                         :partition => false
          validates_as_attachment
        end

    ...but the :partition => false option has no effect. Has anybody else encountered this problem? How did you fix it?

    Read the article

  • Entity Framework: How to specify parameter type in generated SQL (SQL Server 2005), Nvarchar vs Varchar

    - by Gratzy
    In Entity Framework I have an entity 'Client' that was generated from a database. There is a property called 'Account'; it is defined in the storage model as:

        <Property Name="Account" Type="char" Nullable="false" MaxLength="6" />

    And in the conceptual model as:

        <Property Name="Account" Type="String" Nullable="false" />

    When select statements are generated using a variable for Account, i.e. where m.Account == myAccount..., Entity Framework generates a parameterized query with a parameter of type NVarchar(6). The problem is that the column in the table has a data type of char(6). When this is executed there is a large performance hit because of the data type difference: Account is an index on the table, and instead of using the index I believe an index scan is done. Anyone know how to force EF not to use Unicode for the parameter and use Varchar(6) instead?

    Read the article

  • MySQL storing negative and positive decimals

    - by Shishant
    Hello, I want to be able to store values like -11.99 and +11.99 in a MySQL DB. I am thinking of DECIMAL instead of VARCHAR, but reading the MySQL site I found out that its behaviour is incompatible with older versions of MySQL:

        As a result of the change from string to numeric format for DECIMAL storage, DECIMAL columns no longer store a leading + or - character or leading 0 digits. Before MySQL 5.0.3, if you inserted +0003.1 into a DECIMAL(5,1) column, it was stored as +0003.1. As of MySQL 5.0.3, it is stored as 3.1. For negative numbers, a literal - character is no longer stored. Applications that rely on the older behavior must be modified to account for this change.

    So what should the data type be, if I have to give up VARCHAR and make it compatible with older versions too?

    Read the article

  • What's the difference between Paxos and W+R>=N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of quorums, i.e. the number of synchronously written replicas (W) and the number of replicas to read (R) should be chosen in such a way that W+R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent, fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R > N scheme?
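
    The quorum condition itself is just a counting argument: when W + R > N, every read set must intersect every write set, so at least one replica in the read holds the latest acknowledged write. A tiny illustration of that check (not tied to any particular database):

        // Pigeonhole argument behind quorum reads: a read of R replicas can avoid the
        // W written replicas only if at least R replicas lie outside the write set,
        // i.e. R <= N - W. So W + R > N forces an overlap.
        final class QuorumCheck {
            static boolean readMustSeeWrite(int n, int w, int r) {
                return w + r > n;            // equivalently: r > n - w
            }

            public static void main(String[] args) {
                System.out.println(readMustSeeWrite(3, 2, 2));  // true  - classic quorum
                System.out.println(readMustSeeWrite(3, 1, 1));  // false - a read may miss the write
            }
        }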

    Read the article

  • How does jQuery store data with .data()?

    - by TK
    I am a little confused about how jQuery stores data with the .data() function. Is this something called expando? Or is it using HTML5 Web Storage (although I think that is very unlikely)? The documentation says: "The .data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks." As I read about expando, it seems to carry a risk of memory leaks. Unfortunately my skills are not enough to read and understand the jQuery code itself, but I want to know how jQuery stores such data when using .data(). http://api.jquery.com/data/

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: certain small regions of relatively high density, and relatively few isolated nonempty cells. The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). This needs to be easy to persist. Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction while searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process). What advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
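
    One storage shape that fits several of these operations (only one possibility, and every name below is illustrative) is to split the plane into fixed-size chunks keyed by chunk coordinates, so a plain relational table such as (chunk_x, chunk_y, cells BLOB, nonempty_count) can back the grid without any persistent process. A minimal in-memory sketch of the chunk bookkeeping:

        import java.util.HashMap;
        import java.util.Map;

        // Sparse "infinite" grid backed by fixed-size chunks; empty chunks are simply absent.
        class SparseGrid {
            static final int CHUNK = 64;                      // cells per chunk edge (illustrative)
            private final Map<Long, byte[]> chunks = new HashMap<>();

            private static long key(int cx, int cy) {         // pack chunk coordinates into one map key
                return ((long) cx << 32) ^ (cy & 0xffffffffL);
            }

            byte getCell(int x, int y) {
                byte[] chunk = chunks.get(key(Math.floorDiv(x, CHUNK), Math.floorDiv(y, CHUNK)));
                return chunk == null ? 0 : chunk[Math.floorMod(y, CHUNK) * CHUNK + Math.floorMod(x, CHUNK)];
            }

            void setCell(int x, int y, byte value) {
                long k = key(Math.floorDiv(x, CHUNK), Math.floorDiv(y, CHUNK));
                byte[] chunk = chunks.computeIfAbsent(k, unused -> new byte[CHUNK * CHUNK]);
                chunk[Math.floorMod(y, CHUNK) * CHUNK + Math.floorMod(x, CHUNK)] = value;
            }
        }

    A neighborhood read then touches at most a few chunks (rows), and per-chunk summaries such as nonempty_count give cheap approximations for the outline, density and spawn-location queries without scanning individual cells.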

    Read the article

  • Where should I catch WM_HIBERNATE and WM_CLOSE in Windows Mobile/WinCE?

    - by afriza
    I have read about the Windows Mobile X button's behaviour, WM_HIBERNATE, and WM_CLOSE in low-memory situations.

    MSDN on WM_HIBERNATE: "This message is sent to an application when system resources are running low. An application should attempt to release as many resources as possible when sent this message by unloading dialog boxes, destroying windows, or freeing up as much local storage as possible without changing the internal state."

    MSDN on WM_CLOSE: "This message is sent as a signal that a window or an application should terminate."

    Where should I catch these messages? In the main message pump? In every window? Or only in some windows? If I am using MFC, where should I catch them?

    Read the article

  • Strategy in storing ad-hoc numbers/constants?

    - by Jiho Han
    I have a need to store a number of ad-hoc figures and constants for calculation. These numbers change periodically, but they are different types of values: one might be a balance or a money amount, another might be an interest rate, and yet another might be a ratio of some kind. These numbers are then used in a calculation that involves other, more structured figures. I'm not certain what the best way to store these in a relational DB is - that's the choice of storage for the app. One way, which I've done before, is to create a very generic table that stores the values as text. I might store the data type along with it, but since the consumer knows what type it is, in some situations I didn't even need to store the data type. This kind of works fine, but I am not very fond of the solution. Should I break the numbers down into specific categories and create tables that way? For example, create a Rates table, a Balances table, etc.?
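
    For what it's worth, the "generic table" variant described here is usually paired with a small typed accessor on the consumer side, so the text storage stays an implementation detail. A hypothetical sketch (the table layout and all names are invented for illustration):

        import java.math.BigDecimal;
        import java.util.Map;

        // Assumes a generic table like (figure_key, figure_value TEXT, figure_type)
        // has already been loaded into a key -> text map.
        class FigureStore {
            private final Map<String, String> raw;

            FigureStore(Map<String, String> raw) {
                this.raw = raw;
            }

            BigDecimal asDecimal(String key) {   // balances, amounts, rates, ratios
                return new BigDecimal(raw.get(key));
            }

            long asCount(String key) {           // whole-number constants
                return Long.parseLong(raw.get(key));
            }
        }

    Whether that beats per-category tables mostly comes down to how often new kinds of figures appear versus how much the calculations need to query them relationally.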

    Read the article

  • How to prevent autoplay and run my own app when inserting a USB flash drive

    - by Thomas
    When inserting a USB flash drive, Windows normally opens the AutoPlay dialog, which offers to browse the drive or, if there are multimedia files, to choose an app to open them. We developed a media player that is connected via USB and registers itself as a Mass Storage Device. What I need is that, when the player is inserted, this dialog is not shown and my own application is launched instead. Ideally the application would be on the flash drive itself, but as I understand it, Autorun is disabled for USB drives. It would be enough if a preinstalled application were launched. I already tried to catch the WM_DRIVE_CHANGE message, but this only works if my application is the topmost window; otherwise the AutoPlay dialog is displayed. Best, Tom

    Read the article

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert it to a BufferedImage. In abbreviated code for the client:

        public String writeAndReadSocket(String request) {
            // Write text to the socket
            BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
            bufferedWriter.write(request);
            bufferedWriter.flush();

            // Read text from the socket
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            // Read the prefixed size
            int size = Integer.parseInt(bufferedReader.readLine());

            // Get that many bytes from the stream
            char[] buf = new char[size];
            bufferedReader.read(buf, 0, size);
            return new String(buf);
        }

        public BufferedImage stringToBufferedImage(String imageBytes) {
            return ImageIO.read(new ByteArrayInputStream(s.getBytes()));
        }

    and the server:

        # Twisted server code here
        # The analog of the following method is called with the proper client
        # request and the result is written to the socket.
        def worker_thread():
            img = draw_function()
            buf = StringIO.StringIO()
            img.save(buf, format="PNG")
            img_string = buf.getvalue()
            return "%i\r%s" % (sys.getsizeof(img_string), img_string)

    This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the byte stream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
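
    For comparison only (a sketch with hypothetical names, not a drop-in fix): the length-prefixed payload can be read as raw bytes instead of through a Reader, since a Reader decodes characters and will not preserve arbitrary PNG bytes. The sketch also assumes the prefix is the payload's byte count, which on the Python side would be len(img_string):

        import java.awt.image.BufferedImage;
        import java.io.ByteArrayInputStream;
        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import javax.imageio.ImageIO;

        final class ImageWire {
            // Reads "<ascii length>\r<png bytes>" from the socket's raw InputStream.
            static BufferedImage readPrefixedImage(InputStream in) throws IOException {
                DataInputStream data = new DataInputStream(in);
                StringBuilder prefix = new StringBuilder();
                int b;
                while ((b = data.read()) != -1 && b != '\r') {
                    prefix.append((char) b);     // the length prefix is plain ASCII digits
                }
                int size = Integer.parseInt(prefix.toString().trim());
                byte[] payload = new byte[size];
                data.readFully(payload);         // blocks until exactly `size` bytes arrive
                return ImageIO.read(new ByteArrayInputStream(payload));
            }
        }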

    Read the article

  • Django gives "I/O operation on closed file" error when reading from a saved ImageField

    - by Rob Osborne
    I have a model with two image fields, a source image and a thumbnail. When I upload a new source image, save it, and then try to read the source image to crop/scale it into a thumbnail, I get an "I/O operation on closed file" error from PIL. If I upload the source image, don't save it, and then try to read it to crop/scale, I get an "attempting to read from closed file" error from PIL. In both cases the source image is actually saved and available in later request/response loops. If I don't crop/scale in a single request/response loop but instead upload on one page and then crop/scale on another page, this all works fine. This seems to be a cached buffer being reused somehow, either by PIL or by the Django file storage. Any ideas on how to make an ImageField readable after saving?

    Read the article

  • Help with accessing a pre-existing window AFTER opener is refreshed!

    - by Wilhelm Murdoch
    Alright, I'm at my wit's end on this issue. First, backstory: I'm working on a video management system where we allow users, when adding new content, to upload and, optionally, transcode a media file. We're using a Java applet for the browser-based FTP client. What I want to do is allow a user to initiate an upload and then send the FTP connection instance to a popup window. This window will act as a job queue for the FTP transfer process, which lets users move about the main interface without having to stay on the original page until an individual file transfer is complete. For the most part I have all of this working, but here's the problem: if the window is closed, all connections are dropped and the upload process for all queued files is cancelled. So, if Window One opens the Popup Window, adds stuff to the queue, and then refreshes the screen or moves to a different page, how will I access the Popup Window? The popup window and its contents must remain persistent while the user navigates through the original window, and the original window must be able to access the popup to add new jobs to the queue. The popup window itself is independent of the opening window, so communication only happens in one direction: Parent -> Popup, not Parent <- Popup. window.open(null, 'WINDOW_NAME'); will not work in this case. I need to check if a window exists BEFORE using window.open. Help!?!?

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and a DBML in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID.

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    This is the log from the execution (the context log printed to the console). The problem I'm having is that these updates are not persisted in the database: when I query my database (Visual Studio Server Explorer - New Query), I see that the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article

  • Module not found

    - by TMP
    Hi there! I've been working on this one quite a bit and haven't gotten any closer to a solution. I just dug up my old copy of WindowsHookLib again - it's available with source at http://www.codeproject.com/KB/DLL/WindowsHookLib.aspx. This library allows global Windows mouse/keyboard/clipboard hooks, which is very useful. I'm trying to use the mouse hook here to capture mouse motion (I could use a Timer that always polls Cursor.Position, but I plan on using more features of WindowsHookLib later). Code as follows:

        MouseHook mh = new MouseHook();
        mh.InstallHook();
        mh.MouseMove += new EventHandler<WindowsHookLib.MouseEventArgs>(mh_MouseMove);

    But on the call to InstallHook(), I get an exception: "The specified Module could not be found". Strange. Searching, I found that someone thought this occurs because a DLL is not in a place included in the Windows PATH variable, and because placing it in system32 didn't help I went the whole hog and translated the thing to C# for inclusion directly in my project (I was curious how it works). However, the error was obstinately persistent, so I dug a bit into this and found the code in the library that is responsible. In InstallHook(), we have:

        IntPtr hinstDLL = Marshal.GetHINSTANCE(Assembly.GetExecutingAssembly().GetModules()[0]);
        this._hMouseHook = UnsafeNativeMethods.SetWindowsHookEx(14, this._mouseProc, hinstDLL, 0);
        if (this._hMouseHook == IntPtr.Zero)
        {
            throw new MouseHookException(new Win32Exception(Marshal.GetLastWin32Error()).Message);
        }

    And this (after modification and recompilation) tells me that what I'm really getting is the Windows error ERROR_MOD_NOT_FOUND! Now, here I'm stumped. Didn't I just compile the hook library directly into my project? (UnsafeNativeMethods.SetWindowsHookEx is just a DllImported method from user32.) Any answers, prods in the right direction, hints, pointers or similar are very much appreciated!

    Read the article

  • Can I use a keyPath in a predicate?

    - by dontWatchMyProfile
    For some reason, this didn't work (although the key path does exist):

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"department.departmentName == %@", departmentName];
        [fetchRequest setPredicate:predicate];
        NSError *fetchError = nil;
        NSUInteger count = [moc countForFetchRequest:fetchRequest error:&fetchError];
        // execution simply stops at this line, with no error or console log

    Execution just stops at the last line above when asking for the count. I don't get any console log, and I don't get any kind of exception; the execution just stops. There are no objects in the persistent store yet, so maybe it crashes because it tries to follow a key path on a nonexistent object? Does that make sense? The line where GDB stops is this:

        0x002a31cb <+0459> test %eax,%eax

    Previously to that, I see a lot of NSSQLAdapter... calls in the stack trace. There's definitely something wrong. However, when I set the entity to the destination of the key path and then just do something like

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"departmentName == %@", departmentName];

    then there is no problem and count is simply 0.

    Read the article

  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? And for centralized access, which one is better? What are the disadvantages of NoSQL databases like RavenDB, CouchDB, MongoDB, etc.? I can see that some of these are open source and support more data types like enums, objects, etc., but otherwise I don't see any big advantage. Currently there is a problem of storing a huge amount of logs from various locations. I am planning to suggest these to my manager, so I just need a clear idea.

    Read the article

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server & Cursors: I have a table named Order:

        OrderID  Item  Amount
        1        A     10
        1        B     1
        2        A     5
        2        C     4
        2        D     21
        3        B     11

    I have a second table named Storage:

        Item  Amount
        A     40
        B     44
        C     20
        D     1

    For every OrderID, I want to check if enough items are available. If not, I want to return an error message. How can this be done with Cursors at all? Are nested cursors the solution to this? My main issue is to understand how I can fetch the OrderID as an actual "Group" of ID=1, 2, 3 etc. instead of line by line.

    Read the article

  • Adding items to Model Collection in ASP.NET MVC

    - by Stacey
    In an ASP.NET MVC application where I have the following model...

        public class EventViewModel
        {
            public Guid Id { get; set; }
            public List<Guid> Attendants { get; set; }
        }

    passed to a view...

        public ActionResult Create(EventViewModel model)
        {
            // ...
        }

    I want to be able to add to the "Attendants" Collection from the View. The code to add it works just fine - but how can I ensure that the View model retains the list? Is there any way to do this without saving/opening/saving etc? I am calling my addition method via jQuery $.load.

        public void Insert(Guid attendant)
        {
            // add the attendant to the attendees list of the model - called via $.load
        }

    I would also like to add there is no database. There is no storage persistence other than plain files.

    Read the article

  • Populate properties decorated with an attribute

    - by PUT
    Are there any frameworks that assist me with this (thinking that perhaps StructureMap can help me)? Whenever I create a new instance of "MyClass", or any other class that inherits from IMyInterface, I want all properties decorated with [MyPropertyAttribute] to be populated with values from a database or some other data storage, using the property Name in the attribute.

        public class MyClass : IMyInterface
        {
            [MyPropertyAttribute("foo")]
            public string Foo { get; set; }
        }

        [AttributeUsage(AttributeTargets.Property)]
        public sealed class MyPropertyAttribute : System.Attribute
        {
            public string Name { get; private set; }

            public MyPropertyAttribute(string name)
            {
                Name = name;
            }
        }

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file creation) and simple commands like ls require extra care due to internal limits of bash:

        -bash: /bin/ls: Argument list too long.

    Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from an embarrassingly parallel computation before it gets unmanageable?

    Read the article

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) ill-fitted. I'm wondering whether there is a product out there that matches the following characteristics:

    - all data put into it is persistent
    - all reads are delivered out of memory
    - data is universally available
    - data lives where it is most needed
    - data is versioned (nice to have)
    - updates are transactional (I'd like ACID characteristics)
    - data is potentially replicated, but always in sync
    - works on Windows
    - is based on or has bindings for .NET
    - is really fast
    - is really robust
    - is redundant
    - is scalable

    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • Zend_Session: unserialize session data

    - by takeshin
    I'm using a session SaveHandler to persist session data in the database. Here is a sample session_data column from the database:

        Messenger|a:1:{s:13:"page_messages";a:0:{}}userSession|a:1:{s:7:"referer";s:32:"http://cms.dev/user/profile/view";}Zend_Auth|a:1:{s:7:"storage";O:19:"User_Model_Identity":3:{s:2:"id";s:1:"1";s:8:"username";s:13:"administrator";s:4:"slug";s:13:"administrator";}}

    I want to delete the Zend_Auth object from this session data. How can I unserialize those objects and remove the one I need? I suspect that I don't have to write my own parser and that Zend_Session already has a method to do this. I have tried different combinations of unserialize, but it still returns false. I'm using the autoloader from ZF 1.10.2 and Doctrine 1.2.

    Read the article
