Search Results

Search found 5474 results on 219 pages for 'tiered storage'.

  • Multi-process builds in Visual Studio 2010: Worth it?

    - by coryr
    I've started testing our C++ software with VS2010 and the build times are really bad (30-45 minutes, about double the VS2005 times). I've been reading about the /MP switch for multi-process compilation. Unfortunately, it is incompatible with some features that we use quite a bit, like #import, incremental compilation, and precompiled headers. Have you had a similar project where you tried the /MP switch after turning off things like precompiled headers? Did you get faster builds? My machine is running 64-bit Windows 7 on a 4-core machine with 4 GB of RAM and fast SSD storage. Virus scanner disabled and a pretty minimal software environment.
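
    A minimal sketch of what the /MP experiment can look like from the command line (file names are placeholders; note that /MP does require minimal rebuild to be off, i.e. /Gm-):

        rem Hypothetical cl invocation: /MP4 runs up to four compiler processes;
        rem /Gm- turns off minimal rebuild, which cannot be combined with /MP.
        cl /MP4 /Gm- /c module_a.cpp module_b.cpp module_c.cpp module_d.cpp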

  • Add C pointer to NSMutableArray

    - by Georges Oates Larsen
    I am writing an Objective-C program that deals with low-level image memory. I am using ANSI-C structs for my data storage -- full-blown objects seem overkill, seeing as the data I am storing is 100% data, with no methods to operate on it. Specifically, I am writing a customizable posterization algorithm which relies on an array of colors -- this is where things get tricky. I am storing my colors as structs of three floats and an integer flag (related to the posterization algorithm specifically). Everything is going well, except for one thing... [actual question] I can't figure out how to add pointers to an NSMutableArray! I know how to add an object, but adding a pointer to a struct seems to be more difficult -- I do not want NSMutableArray dereferencing my pointer and treating the struct as some sort of strange object. I want NSMutableArray to add the pointer itself to its collection. How do I go about doing this? Thanks in advance, G
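
    A commonly suggested approach (a sketch, not from the original post; the struct and variable names are hypothetical) is to box the raw pointer in an NSValue, which stores the pointer itself without dereferencing it:

        #import <Foundation/Foundation.h>
        #include <stdlib.h>

        // Hypothetical color struct matching the question's description.
        typedef struct {
            float r, g, b;
            int   flag;   // posterization-specific flag
        } PosterColor;

        PosterColor *color = malloc(sizeof(PosterColor));
        NSMutableArray *colors = [NSMutableArray array];

        // NSValue boxes the pointer value; the array never touches the struct.
        [colors addObject:[NSValue valueWithPointer:color]];

        // Retrieval: unbox back to the raw pointer.
        PosterColor *fetched = [[colors objectAtIndex:0] pointerValue];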

  • Can expiration policies be configured in entlib caching application block?

    - by stesoc
    Hi, Is there a way to tell a CacheManager that every item added will have the same expiration policy? For example in:

        <cachingConfiguration defaultCacheManager="DefaultCacheManager">
          <cacheManagers>
            <add name="TestCM"
                 expirationPollFrequencyInSeconds="60"
                 maximumElementsInCacheBeforeScavenging="1000"
                 numberToRemoveWhenScavenging="10"
                 backingStoreName="Null Storage"
                 type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>

    I expected to have some attribute like expirationPolicy="AbsoluteTime" or "SlidingTime" and an expirationValue="..." for specifying the timespan to use. Thanks, s.
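
    The caching block normally takes the expiration policy per Add call rather than from configuration; a sketch of that route (assuming the EntLib 4.1 ICacheManager API; the key and value are hypothetical):

        using System;
        using Microsoft.Practices.EnterpriseLibrary.Caching;
        using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

        ICacheManager cache = CacheFactory.GetCacheManager("TestCM");

        // SlidingTime restarts the countdown on each access;
        // AbsoluteTime expires at a fixed moment regardless of access.
        cache.Add("employee:42", employee,
                  CacheItemPriority.Normal,
                  null,  // no refresh action
                  new SlidingTime(TimeSpan.FromMinutes(5)));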

  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? And for centralized access, which one is better? What are the disadvantages of non-SQL databases like RavenDB, CouchDB, MongoDB, etc.? I can see that some of these are open source and support more datatypes like enums, objects, etc., but otherwise I don't see any big advantage. Currently there is a problem of storing a huge amount of logs from various locations. I am planning to suggest these to my manager, so I just need a clear idea.

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server & cursors. I have a table named Order:

        OrderID  Item  Amount
        1        A     10
        1        B     1
        2        A     5
        2        C     4
        2        D     21
        3        B     11

    I have a second table named Storage:

        Item  Amount
        A     40
        B     44
        C     20
        D     1

    For every OrderID, I want to check if enough items are available. If not, I want to return an error message. Can this be done with cursors at all? Are nested cursors the solution to this? My main issue is understanding how I can fetch the OrderIDs as actual groups (ID = 1, 2, 3, etc.) instead of line by line.
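
    For comparison, a set-based sketch with no cursors (table and column names are taken from the question; the query itself is an assumption about the intent): it reports, per OrderID, whether every line of the order can be covered by Storage:

        -- 0 in AllAvailable means at least one line of the order
        -- asks for more than Storage currently holds.
        SELECT   o.OrderID,
                 MIN(CASE WHEN o.Amount <= s.Amount THEN 1 ELSE 0 END) AS AllAvailable
        FROM     [Order] AS o
        JOIN     Storage AS s ON s.Item = o.Item
        GROUP BY o.OrderID;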

  • Stream classes: design pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes - I need a pattern to create independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (reading or writing) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether segments overlap or not doesn't matter), I cannot easily create views over the stream; the views would store start and end positions. The trouble is that reading from a view - which would translate to reading from the underlying stream, adjusted for the start/end positions - would change the state of the underlying stream instance. So what I could do is take a read on a view instance, adjust the Position of the stream, and read the chunks I need. But I cannot do that concurrently. Why is it designed this way, and what kind of pattern could I implement to create independent views over a single stream instance that would allow reading and writing independently (and concurrently)?
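
    A sketch of one workable pattern (type and member names hypothetical): each view keeps its own logical position and serializes access to the shared stream, re-seating the stream's Position under a lock so concurrent reads through different views cannot disturb each other:

        using System;
        using System.IO;

        // A read-only window [start, start+length) over a shared stream.
        class StreamView
        {
            private readonly Stream _source;
            private readonly object _gate;          // shared by all views
            private readonly long _start, _length;
            private long _position;                 // view-relative, not stream-relative

            public StreamView(Stream source, object gate, long start, long length)
            {
                _source = source; _gate = gate; _start = start; _length = length;
            }

            public int Read(byte[] buffer, int offset, int count)
            {
                lock (_gate)                        // makes seek+read atomic
                {
                    int toRead = (int)Math.Min(count, _length - _position);
                    if (toRead <= 0) return 0;
                    _source.Position = _start + _position;
                    int read = _source.Read(buffer, offset, toRead);
                    _position += read;
                    return read;
                }
            }
        }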

  • Consolidate loan, purchase & sale tables into one transaction table.

    - by Frank Computer
    INFORMIX-SE with ISQL 7.3: I have separate tables for Loan, Purchase & Sale transactions. Each table's rows are joined to their respective customer rows by:

        customer.id [serial] = loan.foreign_id [integer]
                             = purchase.foreign_id [integer]
                             = sale.foreign_id [integer]

    I would like to consolidate the three tables into one table called "transaction", where a column

        transaction.trx_type char(1) {L=Loan, P=Purchase, S=Sale}

    identifies the transaction type. Each transaction would be assigned a unique transaction number [serial]. Is this a good idea, or is it better to keep them in separate tables? Storage space is not a concern. I think it would be easier programming- and user-wise to have all types of transactions under one table whenever possible, even though this implies denormalization.
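
    A sketch of what the consolidated table might look like (Informix SE syntax; the columns beyond those named in the question are hypothetical, and "transaction" can collide with a reserved word on some engines, so verify the name):

        { one row per transaction; trx_type discriminates L/P/S }
        CREATE TABLE transaction
        (
            trx_num    SERIAL,          { unique transaction number }
            trx_type   CHAR(1),         { L=Loan, P=Purchase, S=Sale }
            foreign_id INTEGER,         { joins to customer.id }
            trx_date   DATE,
            amount     DECIMAL(12,2)
        );
        CREATE INDEX trx_cust ON transaction (foreign_id);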

  • How can I use Amazon's API in PHP to search for books?

    - by TerranRich
    I'm working on a Facebook app for book sharing, reviewing, and recommendations. I've scoured the web, searched Google using every search phrase I could think of, but I could not find any tutorials on how to access the Amazon.com API for book information. I signed up for an AWS account, but even the tutorials on their website didn't help me one bit. They're all geared toward using cloud computing for file storage and processing, but that's not what I want. I just want to access their API to search info on books. Kind of like how http://openlibrary.org/ does it, where it's a simple URL call to get information on a book (but their databases aren't nearly as populated as Amazon's). Why is it so damned hard to find the information I need on Amazon's AWS site? If anybody could help, I would greatly appreciate it.
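
    For book lookups the relevant service is the Product Advertising API (the e-commerce API, not the cloud-computing services the AWS tutorials cover). A hedged PHP sketch of a signed ItemSearch call - the endpoint and operation are real, but the keys are placeholders and the signing details should be checked against Amazon's current documentation:

        <?php
        // Hypothetical sketch: search books by keyword via ItemSearch.
        $accessKey = 'YOUR-ACCESS-KEY';
        $secretKey = 'YOUR-SECRET-KEY';

        $params = array(
            'Service'        => 'AWSECommerceService',
            'Operation'      => 'ItemSearch',
            'SearchIndex'    => 'Books',
            'Keywords'       => 'snow crash',
            'ResponseGroup'  => 'ItemAttributes',
            'AWSAccessKeyId' => $accessKey,
            'Timestamp'      => gmdate('Y-m-d\TH:i:s\Z'),
        );
        ksort($params);   // the signature is computed over sorted parameters
        $query  = http_build_query($params);
        $toSign = "GET\nwebservices.amazon.com\n/onca/xml\n" . $query;
        $sig    = rawurlencode(base64_encode(
                      hash_hmac('sha256', $toSign, $secretKey, true)));

        $xml = simplexml_load_file(
            "http://webservices.amazon.com/onca/xml?$query&Signature=$sig");
        echo $xml->Items->Item[0]->ItemAttributes->Title;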

  • What's the difference between Paxos and W+R >= N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of a quorum: the number of synchronously written replicas (W) and the number of replicas to read (R) are chosen such that W+R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent, fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R > N scheme?

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (for the file creation) and simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from an embarrassingly parallel computation before it gets unmanageable?
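
    One database-free pattern (a sketch; the results path is a placeholder, and it assumes the cluster's shared filesystem honors flock, which NFS setups do not always do): each job appends its handful of lines under an exclusive advisory lock, so jobs that finish simultaneously serialize instead of interleaving:

        import fcntl

        def append_result(lines, path="results.txt"):
            """Append this job's output atomically with respect to other jobs."""
            with open(path, "a") as f:
                fcntl.flock(f, fcntl.LOCK_EX)   # block until we own the file
                f.write("\n".join(lines) + "\n")
                f.flush()
                fcntl.flock(f, fcntl.LOCK_UN)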

  • Hopefully a simple topic to spark some good opinions: MySQL or SQL Server?

    - by magellings
    I'm beginning development of a website and a high priority is for it to be extremely optimized: quick responses, etc. There will ultimately end up being large amounts of rows in the main tables (millions), so scalability is also important. It will need to use a database on the back-end for data storage, and my web hosting service supports either MySQL or SQL Server. This website will be developed with ASP.NET MVC and NHibernate (hopefully it can run in medium-trust mode, as that is a requirement of my web hosting, and the reflection requirements of NHibernate may be problematic - maybe someone has a comment on this too). I'd also prefer to use the database that will require the least attention in regards to management. I don't want to have to be a DBA here. :) I wanted to throw this topic out to the public to see what the community thinks. So MySQL or SQL Server - generally, which one would be better to use?

  • Using a password to generate two distinct hashes without reducing password security

    - by Nevins
    Hi there, I'm in the process of designing a web application that will require the storage of GPG keys in an encrypted format in a database. I'm planning on storing the user's password as a bcrypt hash in the database. What I would like to be able to do is use that hash to authenticate the user, then use the combination of the stored bcrypt hash and another hash of the password to encrypt and decrypt the GPG keys. My question is whether I can do this without reducing the security of the password. I was thinking I may be able to use something like an HMAC-SHA256 of a static string, using the password and a salt as the secret key. Is there a better way to do this that I haven't thought of? Thanks
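
    A sketch of the HMAC idea in the question (Python; the context string is hypothetical, and a dedicated KDF such as PBKDF2 would be the more standard choice): the derived key is a one-way function of the password, independent of the stored bcrypt hash:

        import hashlib
        import hmac

        def derive_encryption_key(password: bytes, salt: bytes) -> bytes:
            """Derive a 32-byte key for encrypting the GPG keys.

            Independent of the bcrypt login hash: revealing either value
            should not reveal the other, or the password itself.
            """
            return hmac.new(password + salt, b"gpg-key-encryption-v1",
                            hashlib.sha256).digest()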

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally:

        /// Standard library allocator implementation using LocalAlloc and LocalReAlloc
        /// to create a dynamically-sized array.
        /// Memory allocated by this allocator is never deallocated. That is up to the
        /// user.
        template< class T, int max_allocations >
        class LocalAllocator
        {
        public:
            typedef T value_type;
            typedef size_t size_type;
            typedef ptrdiff_t difference_type;
            typedef T* pointer;
            typedef const T* const_pointer;
            typedef T& reference;
            typedef const T& const_reference;

            pointer address( reference r ) const { return &r; };
            const_pointer address( const_reference r ) const { return &r; };

            LocalAllocator() throw() : c_( NULL ) { };

            /// Attempt to allocate a block of storage with enough space for n elements
            /// of type T. n>=1 && n<=max_allocations.
            /// If memory cannot be allocated, a std::bad_alloc() exception is thrown.
            pointer allocate( size_type n, const void* /*hint*/ = 0 )
            {
                if( NULL == c_ )
                {
                    c_ = LocalAlloc( LPTR, sizeof( T ) * n );
                }
                else
                {
                    HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND );
                    if( NULL == c )
                        LocalFree( c_ );
                    c_ = c;
                }
                if( NULL == c_ )
                    throw std::bad_alloc();
                return reinterpret_cast< T* >( c_ );
            };

            /// Normally, this would release a block of previously allocated storage.
            /// Since that's not what we want, this function does nothing.
            void deallocate( pointer /*p*/, size_type /*n*/ )
            {
                // no deallocation is performed. that is up to the user.
            };

            /// maximum number of elements that can be allocated
            size_type max_size() const throw() { return max_allocations; };

        private:
            /// current allocation point
            HLOCAL c_;
        }; // class LocalAllocator

    My application is using that allocator implementation in a std::vector:

        #define MAX_DIRECTORY_LISTING 512
        std::vector< WIN32_FIND_DATA,
                     LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list;

        WIN32_FIND_DATA find_data = { 0 };
        HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data );
        if( NULL != find_file )
        {
            do
            {
                // access violation here on the 257th item.
                file_list.push_back( find_data );
            } while ( ::FindNextFile( find_file, &find_data ) );
            ::FindClose( find_file );
        }

        // data submitted to the API that requires a LocalAlloc()'d array of
        // WIN32_FIND_DATA structures
        SubmitData( &file_list.front() );

    On the 257th item added to the vector, the application crashes with an access violation:

        Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt'
        AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8)
        RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007
        First-chance exception at 0x03f9e3c8 in rapiclnt.exe:
        0xC0000005: Access violation reading location 0x01ae0020.

    LocalAllocator::allocate is called with n=512 and LocalReAlloc() succeeds. The actual access violation occurs within the std::vector code after the LocalAllocator::allocate call:

        0x03f9e3c8
        0x03f9ff04
        > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++
        MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++
        MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++
        MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++

    If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

  • Optimizing a MySQL statement with lots of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million or so records in it at any given time. Records are inserted at a very fast rate and then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        select url
        from website ws, ping pi
        where ws.idproxy = pi.idproxy
          and pi.entrytime > curdate() - 3
          and contentping + tcpping is not null
        group by url
        having sum(contentping + tcpping) / (count(*) - count(errortype)) < 500
           and count(*) > 3
           and count(errortype) / count(*) < .15
        order by sum(contentping + tcpping) / (count(*) - count(errortype)) asc;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.
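
    One thing worth trying for this shape of query (a sketch, not a measured fix): a composite index so the join column and the date-range filter can be satisfied by one access path, instead of the single-column entrytime index alone:

        -- Hypothetical composite index: join column first, then the range
        -- column, so MySQL can seek on idproxy and range-scan entrytime.
        ALTER TABLE ping ADD INDEX idx_proxy_entry (idproxy, entrytime);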

  • GWT as an offline app, to be deployed onto an iPad

    - by Maroloccio
    I often use GWT for web UIs. I have heard of it being used a fair bit in conjunction with Gears for offline solutions (nowadays HTML5 offline storage is all the rage), and I'd like to experiment with building a GUI in GWT and using it on my iPad. Any tips/tutorials on how to deploy it onto the device so it acts as much as possible like a resident "App"? This is just a curiosity/experiment to fill a weekend... (I can "free" the iPad for the experiment if need be, yet I am sure a lot can be done without doing so...)
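
    A sketch of the two HTML5 pieces usually combined for this (file names are placeholders): an offline cache manifest covering the compiled GWT output, plus Apple's meta tags so the page, bookmarked to the home screen, launches full-screen like a resident app:

        <!-- hypothetical GWT host page -->
        <html manifest="app.appcache">  <!-- app.appcache lists index.html and
                                             the myapp.nocache.js bootstrap -->
        <head>
          <meta name="apple-mobile-web-app-capable" content="yes" />
          <meta name="apple-mobile-web-app-status-bar-style" content="black" />
          <script type="text/javascript" src="myapp/myapp.nocache.js"></script>
        </head>
        <body></body>
        </html>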

  • Microsoft Access to SQL Server - synchronization

    - by David Pfeffer
    I have a client that uses a point-of-sale solution involving an Access database for its back-end storage. I am trying to provide this client with a service that involves, for SLA reasons, the need to copy parts of this Access database into tables in my own database server which runs SQL Server 2008. I need to do this on a periodic basis, probably about 5 times a day. I do have VPN connectivity to the client. Is there an easy programmatic way to do this, or an available tool? I don't want to handcraft what I assume is a relatively common task.
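
    One low-ceremony route to sketch (assuming the .mdb file is reachable over the VPN from the SQL Server box and the Jet/ACE OLE DB provider is installed; all paths and names are placeholders): a SQL Server Agent job that pulls new rows directly from the Access file several times a day:

        -- Hypothetical pull of one table from the Access back-end.
        INSERT INTO dbo.PosOrders (OrderID, OrderDate, Total)
        SELECT src.OrderID, src.OrderDate, src.Total
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        '\\client-vpn\share\pos.mdb'; 'Admin'; '',
                        'SELECT OrderID, OrderDate, Total FROM Orders') AS src
        WHERE src.OrderID NOT IN (SELECT OrderID FROM dbo.PosOrders);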

  • Choosing Portal/CMS software for developing multi-brand websites?

    - by hbagchi
    We are in the early stages of overhauling a multi-brand website, built using a custom-developed Java MVC framework, to enable Web 2.0 features. Built-in features we are looking at are: i18n, SSO, content search and indexing, personalization, mashup support, AJAX support, rich-media content storage and management support, friendliness to search engine optimization, bookmarkable URLs, support for social networking sites, and support for page composition and decoration using templates. A combination of these features is supported by many portal and CMS products. Any insights will be very helpful in using a portal/CMS combination to address these requirements! This is a follow-up on this post, focusing on the portal/CMS angle.

  • svn working copy spanning 2 physical drives?

    - by ronenosity
    I have a large repository hosted on DreamHost that I back up daily on a remote machine, using a Windows scheduled task which updates a working copy located on an external 300GB USB drive connected to that machine. The 300GB drive is nearly full, with only 26GB of free space remaining. Recently I added a second external USB drive, a 1TB one, to increase storage capacity. I would like to ask: what is the best way to use the new 1TB drive? Is it possible to span the working copy across both drives (e.g., somehow create a 1.3TB volume)? If I copied the working copy from the 300GB drive to the 1TB drive, would svn carry on from there, or do I need to retarget the update-working-copy script to the new 1TB drive and start downloading everything again (the least desired option)? Thanks
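
    A sketch of the copy-and-retarget route (drive letters are hypothetical): a Subversion working copy carries all of its metadata inside it, so copying it wholesale and pointing the scheduled task at the new path is normally all that's needed - no re-download:

        rem Copy the working copy from the full 300GB drive to the 1TB drive.
        robocopy E:\repo-backup F:\repo-backup /MIR /COPYALL

        rem Verify the copy is healthy, then update it in its new home.
        svn cleanup F:\repo-backup
        svn update  F:\repo-backup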

  • Free and Simple Hosting Solution with DB Support?

    - by darren
    Hi everyone, I have compiled an SQLite database that I would like to make public. I want to provide a very simple webpage that has a few form elements which are used to query the database. Does anybody know of a free hosting solution that would provide me with free web and database functionality? I am familiar with Google App Engine, and that would work, but I don't want to spend time using their custom storage scheme. I'm looking for something free that I can put together very quickly. Thanks for any suggestions.

  • C arrays: setting size dynamically?

    - by user336994
    Hello, I am new to C programming. I am trying to set the size of an array using a variable, but I am getting an error: "storage size of 'array' isn't constant".

        01  int bound = bound*4;
        02  static GLubyte vertsArray[bound];

    I have noticed that when I replace bound (within the brackets on line 02) with a number, say '20', the program runs with no problems. But I am trying to set the size of the array dynamically... Any ideas why I am getting this error? Thanks much,
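
    A sketch of the usual fix (assuming OpenGL headers for GLubyte; numQuads is a hypothetical stand-in for wherever bound really comes from): an array with static storage duration needs a compile-time constant size, so a size known only at run time means allocating on the heap:

        #include <stdlib.h>
        #include <GL/gl.h>   /* for GLubyte */

        /* 'static GLubyte verts[bound]' is illegal because static storage
           requires a constant size; a runtime size goes on the heap. */
        GLubyte *make_verts(int numQuads)
        {
            int bound = numQuads * 4;            /* hypothetical sizing rule */
            GLubyte *verts = malloc(bound * sizeof *verts);
            return verts;  /* caller checks for NULL and calls free() */
        }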

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich-text documents in a SQL Compact database. Documents are converted to a byte array and stored as a Binary column, and I am running into SQL Compact's 8K limit for Binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way of getting around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.
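
    For reference, a sketch of the change-the-data-type route (table and column names hypothetical): SQL Server Compact's varbinary stops at 8000 bytes, but its image type accepts much larger values, so documents can stay whole:

        -- Hypothetical documents table for SQL Server Compact.
        -- 'image' takes binary values well beyond the 8K varbinary cap.
        CREATE TABLE Documents
        (
            DocumentId INT IDENTITY PRIMARY KEY,
            Title      NVARCHAR(200),
            Content    IMAGE              -- serialized rich-text bytes
        );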

  • Is it possible to transfer data between HTML pages driven by Spring Web Flow?

    - by Easwaramoorthy Kanagaraj
    I am aware of passing data between JSPs in Spring Web Flow. Is it possible to transfer data between HTML pages driven by Spring Web Flow? I don't want to use the HTML5 local-storage capabilities. Example: page 1 is a search box for an employee id; page 2 is the search result with the employee details. Two ways that I could think of: get the employee details on page 1 by AJAX and pass the result to page 2, or pass the employee id to page 2 and get the result by AJAX in onload. In both cases I need to pass a variable between the pages, and I am confused about how to do this. Is there anything in Spring Web Flow with which I could do this? Thanks in advance, Easwar
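
    A sketch of carrying the value in flow scope (flow, view, and bean names are hypothetical): Spring Web Flow keeps flow-scoped attributes on the server across views, so neither HTML5 storage nor hand-rolled parameter passing is needed:

        <!-- hypothetical fragment of employee-search-flow.xml -->
        <view-state id="searchPage" view="page1">
            <transition on="search" to="resultPage">
                <!-- stash the submitted id in flow scope -->
                <set name="flowScope.employeeId"
                     value="requestParameters.employeeId" />
            </transition>
        </view-state>

        <view-state id="resultPage" view="page2">
            <on-entry>
                <!-- employeeService is an assumed Spring bean -->
                <evaluate expression="employeeService.find(flowScope.employeeId)"
                          result="flowScope.employee" />
            </on-entry>
        </view-state>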

  • Getting the video file stored on a DVR by an IP camera in BlackBerry

    - by sHaH..
    Hi all, I need your help. I am making an application which will display live video from an IP camera on my screen. The help I require is this: many cameras also save their videos onto a DVR. How can I access the DVR and get the video stored on that medium? I need some material or a guide from you all, since I'm new to BlackBerry. Thanks a bunch in advance; I'm waiting for your positive response.

  • InnoDB not supported by webhost. What now?

    - by Peter Perhác
    I was developing a small WAMP web application on my laptop, where I have an instance of MySQL running, and I chose InnoDB for my DB engine. After several weeks' development I wanted to make it available to the public, and found out the database server provided by my web host does not support InnoDB, only MyISAM. The create-and-populate script generated from the InnoDB schema on my laptop, when executed against the live database, manages to create the individual TABLEs but then runs into problems creating the VIEWs. Are views not supported with MyISAM? I know FOREIGN KEYs are not - that's very much why I chose InnoDB... What are my chances of making my InnoDB schema design work with MyISAM? Is there any straightforward way of converting the whole schema from one storage engine to the other? Or should I look for another web host that does provide a MySQL DB with InnoDB support?
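
    A sketch of the mechanical conversion (table name hypothetical; note that MySQL views are a database-level feature, independent of the storage engine, so the VIEW failures deserve a separate look - they may be a privilege or server-version issue):

        -- Convert one table in place on the InnoDB-less host...
        ALTER TABLE orders ENGINE=MyISAM;
        -- ...or rewrite the dump before loading it:
        --   sed 's/ENGINE=InnoDB/ENGINE=MyISAM/' dump.sql > dump_myisam.sql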

  • Does Exchange have the ability to run hidden mailboxes?

    - by MadBoy
    Hello, the title of my question may sound a little odd, but I was wondering whether Exchange 2010 or 2007, or any program that works in conjunction with Exchange, has the ability to create this structure:

    - Users have their normal mailboxes connected and use them in Outlook 2003/2007/2010 as everyone would.
    - Users have additional mailboxes (from the old Exchange 2003) attached, but hidden on demand of the administrator. For example, the administrator could easily disable them, just as if they had never been attached, making them invisible to users and everyone else.
    - It would be good if such mailboxes could be moved out of the system (say, onto an external drive) in one simple step, not a manual job for 100 mailboxes.
    - Users are without the ability to copy/move their mails to outside storage (like a local .pst file).

    Do you guys have any suggestions on this? I was thinking maybe public folders, but that seems like overkill and not really suited for this. And please don't ask me why I need this type of security (it's not something I requested).
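
    A sketch of the nearest Exchange 2010 management-shell building blocks (the cmdlets are real; mailbox names and paths are examples; blocking .pst export is normally an Outlook group-policy setting rather than an Exchange one):

        # Hide the legacy mailbox from the address lists on demand.
        Set-Mailbox "legacy.user" -HiddenFromAddressListsEnabled $true

        # Move a mailbox out of the system as a .pst (Exchange 2010 SP1+).
        New-MailboxExportRequest -Mailbox "legacy.user" `
            -FilePath "\\fileserver\exports\legacy.user.pst"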
