Search Results

Search found 7222 results on 289 pages for 'storage cells'.

Page 223 of 289

  • Adding items to Model Collection in ASP.NET MVC

    - by Stacey
    In an ASP.NET MVC application I have the following model: public class EventViewModel { public Guid Id { get; set; } public List<Guid> Attendants { get; set; } } passed to a view... public ActionResult Create(EventViewModel model) { // ... } I want to be able to add to the "Attendants" collection from the view. The code to add an item works just fine - but how can I ensure that the view model retains the list? Is there any way to do this without saving/opening/saving etc.? I am calling my addition method via jQuery $.load: public void Insert(Guid attendant) { // add the attendant to the attendees list of the model - called via $.load } I would also like to add that there is no database. There is no storage persistence other than plain files.
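
    One way to keep the list alive between requests without any database is to park it in server-side Session state and rebuild the model from it. Below is a rough sketch, not the poster's actual code: the session key and controller wiring are hypothetical, and EventViewModel is the model shown in the question.

        using System;
        using System.Collections.Generic;
        using System.Web.Mvc;

        public class EventController : Controller
        {
            private const string AttendantsKey = "Attendants"; // hypothetical session key

            // Called via jQuery $.load; appends to the list kept in Session.
            public void Insert(Guid attendant)
            {
                var attendants = Session[AttendantsKey] as List<Guid> ?? new List<Guid>();
                attendants.Add(attendant);
                Session[AttendantsKey] = attendants;
            }

            // Rebuilds the view model from Session when the form is posted.
            [HttpPost]
            public ActionResult Create(EventViewModel model)
            {
                model.Attendants = Session[AttendantsKey] as List<Guid> ?? new List<Guid>();
                return View(model);
            }
        }

    If Session state is off the table, rendering one hidden input per Guid so the default model binder repopulates Attendants on post is the other common route.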

    Read the article

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000's of keys, where each key corresponds to a stream of events. I'd like to support the following operations: push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order. tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key. This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately and to keep tails with pushes strictly in sync), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value storage, or something else?
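
    Independent of which database ends up being used, it can help to pin down the contract first. A minimal in-memory C# sketch of the push/tail operations described above - illustration only, no persistence, and the EventStreams type is made up:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One (timestamp, event) list per key, kept sorted by timestamp.
        public class EventStreams<TEvent>
        {
            private readonly Dictionary<string, List<KeyValuePair<long, TEvent>>> streams =
                new Dictionary<string, List<KeyValuePair<long, TEvent>>>();

            public void Push(string key, long timestamp, TEvent evt)
            {
                List<KeyValuePair<long, TEvent>> list;
                if (!streams.TryGetValue(key, out list))
                    streams[key] = list = new List<KeyValuePair<long, TEvent>>();
                list.Add(new KeyValuePair<long, TEvent>(timestamp, evt));
                // Timestamps arrive almost sorted, so bubbling the new entry back is cheap.
                for (int i = list.Count - 1; i > 0 && list[i].Key < list[i - 1].Key; i--)
                {
                    var tmp = list[i];
                    list[i] = list[i - 1];
                    list[i - 1] = tmp;
                }
            }

            // All events for key at or after the given timestamp.
            public IEnumerable<TEvent> Tail(string key, long since)
            {
                List<KeyValuePair<long, TEvent>> list;
                return streams.TryGetValue(key, out list)
                    ? list.Where(p => p.Key >= since).Select(p => p.Value).ToList()
                    : Enumerable.Empty<TEvent>();
            }
        }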

    Read the article

  • Populate properties decorated with an attribute

    - by PUT
    Are there any frameworks that assist me with this: (thinking that perhaps StructureMap can help me) Whenever I create a new instance of "MyClass" or any other class that inherits from IMyInterface I want all properties decorated with [MyPropertyAttribute] to be populated with values from a database or some other data storage using the property Name in the attribute. public class MyClass : IMyInterface { [MyPropertyAttribute("foo")] public string Foo { get; set; } } [AttributeUsage(AttributeTargets.Property)] public sealed class MyPropertyAttribute : System.Attribute { public string Name { get; private set; } public MyPropertyAttribute(string name) { Name = name; } }
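
    Even without a framework, a small reflection helper can do the population; a container can then call it as a post-build step. A rough C# sketch - the Populate helper and its Func<string, object> lookup parameter are my own invention (the lookup stands in for the database or storage read), not part of StructureMap:

        using System;
        using System.Reflection;

        public static class MyPropertyPopulator
        {
            // Fills every property decorated with [MyPropertyAttribute], using the
            // attribute's Name as the key into the supplied lookup.
            public static void Populate(object target, Func<string, object> lookup)
            {
                foreach (PropertyInfo prop in target.GetType().GetProperties())
                {
                    var attr = (MyPropertyAttribute)Attribute.GetCustomAttribute(prop, typeof(MyPropertyAttribute));
                    if (attr != null && prop.CanWrite)
                        prop.SetValue(target, lookup(attr.Name), null);
                }
            }
        }

    StructureMap (or most other containers) can invoke something like this whenever it builds an IMyInterface instance, but that interception wiring depends on the container version in use.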

    Read the article

  • How should my application keep clients in sync with schema changes to HTML5 databases?

    - by Chad Johnson
    I'm wanting to incorporate HTML5 database storage into my web application to make it online-accessible. I've done lots of development in server-side environments with databases, and we all know that database schema additions and modifications are often necessary. I am wondering what should happen if my application uses an offline database schema, and that schema changes. How do I prevent the application from breaking on the client side? How do I ensure the database is always up to date on the client end? Anyone have any solutions?

    Read the article

  • Multi-process builds in Visual Studio 2010: Worth it?

    - by coryr
    I've started testing our C++ software with VS2010 and the build times are really bad (30-45 minutes, about double the VS2005 times). I've been reading about the /MP switch for multi-process compilation. Unfortunately, it is incompatible with some features that we use quite a bit, like #import, incremental compilation, and precompiled headers. Have you had a similar project where you tried the /MP switch after turning off things like precompiled headers? Did you get faster builds? I'm running 64-bit Windows 7 on a 4-core machine with 4 GB of RAM and fast SSD storage, with the virus scanner disabled and a pretty minimal software environment.

    Read the article

  • Zend_Session: unserialize session data

    - by takeshin
    I'm using session SaveHandler to persist session data in the database. Sample session_data column from the database: Messenger|a:1:{s:13:"page_messages";a:0:{}}userSession|a:1:{s:7:"referer";s:32:"http://cms.dev/user/profile/view";}Zend_Auth|a:1:{s:7:"storage";O:19:"User_Model_Identity":3:{s:2:"id";s:1:"1";s:8:"username";s:13:"administrator";s:4:"slug";s:13:"administrator";}} I want to delete Zend_Auth object from this session data. How can I unserialize those objects and remove object I need? I suspect, that I don't have to write my custom parser, that Zend_Session already has a method to do this. I have tried different combinations of unserialize but it still returns false. I'm using autoloader from ZF 1.10.2 and Doctrine 1.2

    Read the article

  • Consolidate loan, purchase & sale tables into one transaction table.

    - by Frank Computer
    INFORMIX-SE with ISQL 7.3: I have separate tables for Loan, Purchase & Sales transactions. Each table's rows are joined to their respective customer rows by: customer.id [serial] = loan.foreign_id [integer]; = purchase.foreign_id [integer]; = sale.foreign_id [integer]; I would like to consolidate the three tables into one table called "transaction", where a column: transaction.trx_type char(1) {L=Loan, P=Purchase, S=Sale} identifies the transaction type. Each transaction will be assigned a unique transaction number [serial]. Is this a good idea, or is it better to keep them in separate tables? Storage space is not a concern; I think it would be easier, programming- and user-wise, to have all types of transactions under one table whenever possible. This implies denormalization.

    Read the article

  • SoundPool.load() and FileDescriptor from file

    - by Hans
    I tried using the load function of the SoundPool that takes a FileDescriptor, because I wanted to be able to set the offset and length. The file is not stored in the resources but is a file on the storage card. Even though neither the load nor the play function of the SoundPool throws any Exception or prints anything to the console, the sound is not played. Using the same code but passing the file path string instead works perfectly. This is how I have tried the loading (start equals 0 and length is the length of the file in milliseconds): FileInputStream fileIS = new FileInputStream(new File(mFile)); mStreamID = mSoundPool.load(fileIS.getFD(), start, length, 0); mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f); If I use this instead, it works: mStreamID = mSoundPool.load(mFile, 0); mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f); Any ideas anyone? Thanks

    Read the article

  • SDCC and malloc() - allocating much less memory than is available

    - by Duncan Bayne
    When I compile this code with SDCC 3.1.0 and run it on an Amstrad CPC 464 (under emulation, with WinCPC 0.9.26 running on Wine): void _test_malloc() { long idx = 0; while (1) { if (malloc(5)) { printf("%ld\r\n", ++idx); } else { printf("done"); break; } } } ... it consistently taps out at 92 malloc()s. I make that 460 bytes, which leads me to a couple of questions: What is malloc() doing on this system? I was sort of hoping for an order of magnitude more storage, even on a 64kB system. The behaviour is consistent on 64kB and 128kB systems; do I have to perform some sort of magic to access the additional memory, like manual bank switching?

    Read the article

  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. Problem is whenever I call something like this: NSPersistentStore *newStore = [self.persistentStoreCoordinator migratePersistentStore: currentiCloudStore toURL: localURL options: nil withType: NSSQLiteStoreType error: &error]; I get this error: -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055): CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator using the iCloud integration options must always be added to the coordinator with the options present in the options dictionary. If you wish to use the store without iCloud, migrate the data from the iCloud store file to a new store file in local storage. file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite. This will be a fatal error in a future release Anyone ever seen this error? Maybe I'm just missing the right migration options?

    Read the article

  • How to prevent autoplay and run my own app when inserting a USB flash drive

    - by Thomas
    When a USB flash drive is inserted, Windows normally opens the AutoPlay dialog, which offers to browse the drive or, if there are multimedia files, to choose an app to open them. We developed a media player that is connected over USB and registers itself as a Mass Storage Device. What I need is that when the player is inserted, this dialog is not shown and my own application is launched instead. Ideally the application would be on the flash drive itself, but as I understand it, Autorun is disabled for USB drives. It would be enough if a preinstalled application were launched. I already tried to catch the WM_DRIVE_CHANGE message, but this only works if my application is the topmost window; otherwise the AutoPlay dialog is displayed. Best, Tom

    Read the article

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes - I need a pattern for creating independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (the read/write position) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether segments overlap or not doesn't matter), I cannot easily create views over the stream; each view would store a start and end position. Reading from a view - which would translate to reading from the underlying stream adjusted by the view's start/end positions - would change the state of the underlying stream instance. What I could do is take a read on a view instance, adjust the Position of the stream, and read the chunks I need, but I cannot do that concurrently. Why is it designed this way, and what kind of pattern could I implement to create independent views over a single stream instance that allow reading/writing independently (and concurrently)?
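
    One workaround, assuming the underlying stream is seekable: give each view its own position and serialize access to the shared stream around each seek-plus-read pair, so no view ever relies on the shared cursor. A C# sketch; the SubStreamView type is made up, not a framework class:

        using System;
        using System.IO;

        // Read-only view over [start, start + length) of a shared seekable stream.
        // Each view keeps its own position; the shared stream is touched under a lock.
        public class SubStreamView
        {
            private readonly Stream inner;
            private readonly object gate;
            private readonly long start;
            private readonly long length;
            private long position; // relative to the view, not to the inner stream

            public SubStreamView(Stream inner, object gate, long start, long length)
            {
                this.inner = inner;
                this.gate = gate;
                this.start = start;
                this.length = length;
            }

            public int Read(byte[] buffer, int offset, int count)
            {
                long remaining = length - position;
                if (remaining <= 0) return 0;
                int toRead = (int)Math.Min(count, remaining);
                lock (gate) // one seek+read against the shared stream at a time
                {
                    inner.Seek(start + position, SeekOrigin.Begin);
                    int read = inner.Read(buffer, offset, toRead);
                    position += read;
                    return read;
                }
            }
        }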

    Read the article

  • How to initialize an std::string using ""?

    - by Mohsin
    I'm facing problems with initializing a std::string variable using "" (i.e. an empty string). It's causing strange behavior in code that was previously working. Is the following statement wrong? std::string operationalReason = ""; When I use the following code everything works fine: std::string operationalReason; operationalReason.clear(); I believe that string literals are stored in a separate memory location that is compiler-dependent. Could the problem I'm seeing actually be indicating a corruption of that storage? If so, it would get hidden by my usage of the clear() function. Thanks.

    Read the article

  • Attachment_fu: can't disable :partition option

    - by Nathan Long
    I'm trying to use the Attachment_Fu plugin in a Rails project, and want to customize the paths where uploaded files are saved. The documentation shows this option: :partition # Whether to partiton files in directories like /0000/0001/image.jpg. Default is true. (The 0001 part is an ID from a table.) I don't want that, so I set the partition option to false, like so: class Photo < ActiveRecord::Base has_attachment :content_type => :image, :storage => :file_system, :max_size => 500.kilobytes, :resize_to => '320x200', :thumbnails => {:thumb => '100x100>' }, :partition => false validates_as_attachment end ...but the :partition => false option has no effect. Has anybody else encountered this problem? How did you fix it?

    Read the article

  • How can I use Amazon's API in PHP to search for books?

    - by TerranRich
    I'm working on a Facebook app for book sharing, reviewing, and recommendations. I've scoured the web, searched Google using every search phrase I could think of, but I could not find any tutorials on how to access the Amazon.com API for book information. I signed up for an AWS account, but even the tutorials on their website didn't help me one bit. They're all geared toward using cloud computing for file storage and processing, but that's not what I want. I just want to access their API to search info on books. Kind of like how http://openlibrary.org/ does it, where it's a simple URL call to get information on a book (but their databases aren't nearly as populated as Amazon's). Why is it so damned hard to find the information I need on Amazon's AWS site? If anybody could help, I would greatly appreciate it.

    Read the article

  • Add C pointer to NSMutableArray

    - by Georges Oates Larsen
    I am writing an Objective-C program that deals with low-level image memory. I am using ANSI-C structs for my data storage -- full-blown objects seem overkill, seeing as the data I am storing is 100% data, with no methods to operate on that data. Specifically, I am writing a customizable posterization algorithm which relies on an array of colors -- this is where things get tricky. I am storing my colors as structs of three floats and an integer flag (related to the posterization algorithm specifically). Everything is going well, except for one thing... [actual question] I can't figure out how to add pointers to an NSMutableArray! I know how to add an object, but adding a pointer to a struct seems to be more difficult -- I do not want NSMutableArray dereferencing my pointer and treating the struct as some sort of strange object. I want NSMutableArray to add the pointer itself to its collection. How do I go about doing this? Thanks in advance, G

    Read the article

  • Can expiration policies be configured in entlib caching application block?

    - by stesoc
    Hi, Is there a way to tell a CacheManager that every item added will have the same expiration policy? For example in: <cachingConfiguration defaultCacheManager="DefaultCacheManager"> <cacheManagers> <add name="TestCM" expirationPollFrequencyInSeconds="60" maximumElementsInCacheBeforeScavenging="1000" numberToRemoveWhenScavenging="10" backingStoreName="Null Storage" type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> I expected to have some attribute like expirationPolicy="AbsoluteTime" or "SlidingTime" and a expirationValue="..." for specifying the timespan to use. Thanks, s.
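
    As far as I can tell the block has no such configuration attribute; expirations are supplied per item when calling Add, so a thin wrapper is the usual workaround. A hedged sketch against Enterprise Library 4.1 - verify the exact Add overload and expiration types against your assemblies:

        using System;
        using Microsoft.Practices.EnterpriseLibrary.Caching;
        using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

        public static class TestCache
        {
            // Every item added through this wrapper gets the same sliding expiration.
            public static void Add(string key, object value)
            {
                var cache = CacheFactory.GetCacheManager("TestCM");
                cache.Add(key, value,
                          CacheItemPriority.Normal,
                          null, // no refresh action
                          new SlidingTime(TimeSpan.FromMinutes(5)));
            }
        }

    Swap SlidingTime for AbsoluteTime if a fixed expiry moment is wanted instead.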

    Read the article

  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? And for centralized access, which one is better? What are the disadvantages of NoSQL databases like RavenDB, CouchDB, MongoDB, etc.? I get that some of these are open source and support more datatypes like enums, objects, etc., but otherwise I don't see any big advantage. Currently there is a problem of storing a huge amount of logs from various locations. I am planning to suggest one of these to my manager, so I just need a clear idea.

    Read the article

  • Hopefully a simple topic to spark some good opinions: MySQL or SQL Server?

    - by magellings
    I'm beginning development of a website, and a high priority is for it to be extremely optimized: quick responses, etc. There will ultimately be large numbers of rows in the main tables (millions), so scalability is also important. It will need a database on the back end for data storage, and my web hosting service supports either MySQL or SQL Server. The website will be developed with ASP.NET MVC and NHibernate (hopefully it can run in medium trust mode, as that is a requirement of my web hosting, and the reflection requirements of NHibernate may be problematic - maybe someone has a comment on this too). I'd also prefer the database that will require the least attention in terms of management; I don't want to have to be a DBA here. :) I wanted to throw this topic out to the public to see what the community thinks. So MySQL or SQL Server - generally, which one would be better to use?

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector< #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector<, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds. 
The actual Access Violation exception occurs within the std::vector< code after the LocalAllocator::allocate call: 0x03f9e3c8 0x03f9ff04 > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++ MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++ If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

    Read the article

  • Microsoft Access to SQL Server - synchronization

    - by David Pfeffer
    I have a client that uses a point-of-sale solution involving an Access database for its back-end storage. I am trying to provide this client with a service that involves, for SLA reasons, the need to copy parts of this Access database into tables in my own database server which runs SQL Server 2008. I need to do this on a periodic basis, probably about 5 times a day. I do have VPN connectivity to the client. Is there an easy programmatic way to do this, or an available tool? I don't want to handcraft what I assume is a relatively common task.
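
    For the copy itself, a small scheduled .NET job can read the Access tables over OLE DB and bulk-load them into SQL Server. A rough sketch; the connection strings, table name handling, and truncate-then-reload strategy are placeholders to adapt:

        using System.Data.OleDb;
        using System.Data.SqlClient;

        public static class AccessToSqlCopier
        {
            // Copies one Access table into a pre-created SQL Server table of the same shape.
            public static void CopyTable(string accessMdbPath, string tableName, string sqlConnectionString)
            {
                string accessConn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + accessMdbPath;

                using (var source = new OleDbConnection(accessConn))
                using (var destination = new SqlConnection(sqlConnectionString))
                {
                    source.Open();
                    destination.Open();

                    // Simple full refresh; an incremental copy would need a timestamp or ID watermark.
                    using (var truncate = new SqlCommand("TRUNCATE TABLE [" + tableName + "]", destination))
                        truncate.ExecuteNonQuery();

                    using (var select = new OleDbCommand("SELECT * FROM [" + tableName + "]", source))
                    using (OleDbDataReader reader = select.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(destination) { DestinationTableName = tableName })
                        bulk.WriteToServer(reader);
                }
            }
        }

    Run it five times a day from Windows Task Scheduler or a SQL Server Agent job; SSIS with a Jet/ACE source is the usual no-code alternative.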

    Read the article

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server and cursors. I have a table named Order:
    OrderID  Item  Amount
    1        A     10
    1        B     1
    2        A     5
    2        C     4
    2        D     21
    3        B     11
    I have a second table named Storage:
    Item  Amount
    A     40
    B     44
    C     20
    D     1
    For every OrderID, I want to check whether enough items are available; if not, I want to return an error message. How can this be done with cursors at all? Are nested cursors the solution to this? My main issue is understanding how I can fetch the OrderIDs as actual "groups" (ID = 1, 2, 3, etc.) instead of line by line.

    Read the article

  • Optimizing a MySQL statement with a lot of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about a million records in it at any given time. Records are inserted at a very fast rate and then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this: select url from website ws, ping pi where ws.idproxy = pi.idproxy and pi.entrytime > curdate() - 3 and contentping+tcpping is not null group by url having sum(contentping+tcpping)/(count(*)-count(errortype)) < 500 and count(*) > 3 and count(errortype)/count(*) < .15 order by sum(contentping+tcpping)/(count(*)-count(errortype)) asc; I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only around 200 rows, so I'm not getting killed there.

    Read the article

  • What's the difference between Paxos and W+R>=N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of a quorum, i.e. the number of synchronously written replicas (W) and the number of replicas to read (R) should be chosen in such a way that W+R>N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R>N scheme?
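
    For what it's worth, the quorum condition itself is plain arithmetic: every read quorum must overlap every write quorum, which holds exactly when R + W > N. A toy C# check, just to make the inequality concrete:

        using System;

        class QuorumDemo
        {
            // True when any R replicas must intersect any W replicas out of N.
            static bool ReadsSeeLatestWrite(int n, int w, int r) => r + w > n;

            static void Main()
            {
                Console.WriteLine(ReadsSeeLatestWrite(3, 2, 2)); // True:  2 + 2 > 3
                Console.WriteLine(ReadsSeeLatestWrite(3, 1, 1)); // False: 1 + 1 <= 3, eventual consistency only
            }
        }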

    Read the article

  • GWT as offline app, to be deployed onto an iPad

    - by Maroloccio
    I often use GWT for web UIs. I have heard of it being used a fair bit in conjunction with Gears for offline solutions (probably nowadays HTML5 "offline storage" is all the rage) and I'd like to experiment with building a GUI in GWT and use it on my iPad. Tips/tutorials on how to deploy it onto the device to act as much as possible like a resident "App"? This is just a curiosity/experiment to fill a week-end... (I can "free" the iPad for the experiment if need be yet I am sure a lot can be done without doing so...)

    Read the article
