Search Results

Search found 41025 results on 1641 pages for 'in memory database'.

  • What are pinned objects?

    - by sagie
    Hi. I am trying to find a memory leak using ANTS Memory Profiler, and I've encountered a new term: pinned objects. Can someone give me a good, simple explanation of what these objects are, how I can pin/unpin objects, and how to detect who pinned an object? Thanks
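
    As background for the question above: in .NET, a pinned object is one the garbage collector is not allowed to move during heap compaction, usually because native code holds a raw pointer into it; long-lived pins fragment the heap, which is why profilers flag them. Below is a minimal C# sketch (not from the original thread) of the two usual ways an object gets pinned.

        using System;
        using System.Runtime.InteropServices;

        class PinningDemo
        {
            static void Main()
            {
                byte[] buffer = new byte[1024];

                // Explicit pinning: the GC cannot move 'buffer' until Free() is called.
                GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
                try
                {
                    IntPtr address = handle.AddrOfPinnedObject();
                    Console.WriteLine("Pinned at 0x{0:X}", address.ToInt64());
                }
                finally
                {
                    handle.Free(); // unpin, allowing compaction again
                }

                // Scoped pinning: 'buffer' is pinned only inside the fixed block.
                // (Compiling this part requires the /unsafe switch.)
                unsafe
                {
                    fixed (byte* p = buffer)
                    {
                        p[0] = 42;
                    }
                }
            }
        }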

  • What is the best strategy for designing a database for an online magazine?

    - by Kaveh
    Hello. I have designed an online magazine and am worried about whether my approach is good. The magazine contains articles and news on all subjects, and I have one table for both articles and news. I would like to know whether this is fine, or whether I should separate articles and news into their own tables (besides the main table there are, of course, several tables for categories, tags, and photos, plus tables for the relations between them). Thanks

  • How to put large text data (~20 MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using the following query to insert some large text data:

        internal static string InsertStorageItem = "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)";

    and the code I am using to execute this query is as follows:

        string content = "very very large data";
        string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM");
        var command = new SqlCeCommand(query, _sqlConnection);
        var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText);
        paramData.Value = content;
        paramData.SourceColumn = "StorageData";
        command.ExecuteNonQuery();

    But at the last line I get the following error:

        System.Data.SqlServerCe.SqlCeException was unhandled by user code
        Message=The data was truncated while converting from one data type to another. [ Name of function(if known) =  ]
        Source=SQL Server Compact ADO.NET Data Provider
        HResult=-2147467259
        NativeError=25920
        StackTrace:
             at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery()
             at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder)
             at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder)
             at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder)
             at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders()
             at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e)
             at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
             at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
        InnerException:

    Now my question is: how am I supposed to insert such large data into a SQL CE database? Regards, Anindya Chatterjee http://abstractclass.org
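
    A plausible cause, offered as a hedged sketch rather than a confirmed fix: if the ntext parameter is bound without an explicit size, the provider may bind it with a default length and truncate the value on the way in. The sketch below keeps the table and column names from the question, parameterizes every value, and sets Size explicitly.

        using System;
        using System.Data;
        using System.Data.SqlServerCe;

        static class StorageWriter
        {
            // Hypothetical helper; table and column names are taken from the question.
            public static void InsertStorageItem(SqlCeConnection connection, string folder,
                                                 string messageId, DateTime messageDate, string content)
            {
                const string sql =
                    "insert into Storage (FolderName, MessageId, MessageDate, StorageData) " +
                    "values (@FolderName, @MessageId, @MessageDate, @StorageData)";

                using (var command = new SqlCeCommand(sql, connection))
                {
                    command.Parameters.Add("@FolderName", SqlDbType.NVarChar).Value = folder;
                    command.Parameters.Add("@MessageId", SqlDbType.NVarChar).Value = messageId;
                    command.Parameters.Add("@MessageDate", SqlDbType.DateTime).Value = messageDate;

                    var data = command.Parameters.Add("@StorageData", SqlDbType.NText);
                    data.Size = content.Length; // assumption: explicit size avoids default-length binding
                    data.Value = content;

                    command.ExecuteNonQuery();
                }
            }
        }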

  • Getting an alert when my Oracle database goes up or down

    - by CodeSlave
    How can I get an e-mail alert when my Oracle database goes down (and, ideally, when it comes back up)? I have a database that I need to watch, preferably from a remote machine. Conceivably I could hack together something that TNS-pings my DB and e-mails me when that changes, but I'm hoping there's a free package out there. Something that would run on Windows. Any strong recommendations?
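
    For what it's worth, the "hack together" idea from the question is only a few dozen lines of C#. This is a rough sketch rather than a recommended package: it polls by opening a connection (via the since-deprecated System.Data.OracleClient provider), and the connection string, addresses and SMTP host are placeholders.

        using System;
        using System.Data.OracleClient;
        using System.Net.Mail;
        using System.Threading;

        class OracleUpDownMonitor
        {
            // Placeholder values; adjust for your TNS name, credentials and SMTP relay.
            const string ConnectionString = "Data Source=ORCL;User Id=monitor;Password=secret;";

            static void Main()
            {
                bool? lastKnownUp = null;
                while (true)
                {
                    bool up = IsDatabaseUp();
                    if (lastKnownUp.HasValue && up != lastKnownUp.Value)
                        SendAlert(up ? "Database is UP" : "Database is DOWN");
                    lastKnownUp = up;
                    Thread.Sleep(TimeSpan.FromMinutes(1)); // poll interval
                }
            }

            static bool IsDatabaseUp()
            {
                try
                {
                    using (var connection = new OracleConnection(ConnectionString))
                    {
                        connection.Open(); // succeeds only if the listener and instance are up
                        return true;
                    }
                }
                catch (OracleException)
                {
                    return false;
                }
            }

            static void SendAlert(string subject)
            {
                // SmtpClient is not IDisposable in .NET 3.5, so no using block here.
                var client = new SmtpClient("smtp.example.com");
                client.Send("monitor@example.com", "dba@example.com", subject, subject);
            }
        }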

  • How to retrieve the size of a database in restoring mode

    - by Marc Wittke
    With pure SQL - how do you do it? I doubt it is possible, since even SQL Server Management Studio fails to show the size of such a database in the UI. Already tried: exec sp_helpdb 'DbInRecMode' won't show anything; exec sys.sp_helpfile 'DbInRecMode' gives something like "file not found" (Msg 15325). The main pitfall seems to be that select * from DbInRecMode.dbo.sysfiles won't work while the database is in restoring mode. Any ideas?

  • fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files. I need to get the fastest performance out of XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import works only with files). Alternatively, I can cache one XslCompiledTransform instance per template. Is that reasonable? Are there other ways? Which is best? What other tips are there for improving the performance of the MS XSLT processor?
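
    One hedged way to attack this (a sketch, not a definitive answer; all names below are illustrative): cache one compiled XslCompiledTransform per stylesheet, since instances are thread-safe for Transform once loaded, and satisfy xsl:import/xsl:include from memory by passing a custom XmlResolver to Load, which resolves both the main stylesheet and anything it imports.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Text;
        using System.Xml;
        using System.Xml.Xsl;

        // Serves the main stylesheet and any xsl:import / xsl:include targets from an
        // in-memory dictionary (keyed by resolved absolute URI) instead of the file system.
        class InMemoryStylesheetResolver : XmlResolver
        {
            readonly IDictionary<string, string> _sources;

            public InMemoryStylesheetResolver(IDictionary<string, string> sources)
            {
                _sources = sources;
            }

            public override ICredentials Credentials { set { } }

            public override object GetEntity(Uri absoluteUri, string role, Type ofObjectToReturn)
            {
                return new MemoryStream(Encoding.UTF8.GetBytes(_sources[absoluteUri.ToString()]));
            }
        }

        static class TransformCache
        {
            static readonly ConcurrentDictionary<string, XslCompiledTransform> Cache =
                new ConcurrentDictionary<string, XslCompiledTransform>();

            // Compile each stylesheet once; the cached instance is reused on every request.
            public static XslCompiledTransform Get(string stylesheetUri, XmlResolver resolver)
            {
                return Cache.GetOrAdd(stylesheetUri, uri =>
                {
                    var transform = new XslCompiledTransform();
                    transform.Load(uri, XsltSettings.Default, resolver);
                    return transform;
                });
            }
        }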

  • Two different definitions of database schema

    - by AspOnMyNet
    a) I found two definitions of "schema":
    FIRST - A set of information that describes a table is known as a schema, and schemas are used to describe specific tables within a database, as well as entire databases (and the relationships between tables in them, if any).
    SECOND - A database schema is a way to logically group objects such as tables, views, stored procedures etc. Think of a schema as a container of objects.
    I assume the two descriptions describe entirely different concepts, which just happen to use the same name?
    b) Regarding the second definition: if I understand it correctly, a database schema is similar to a namespace, the only difference being that we can assign access permissions to a database schema, while the same can't be done with namespaces?
    Thanks

  • Are .dll files loaded once for every program or once for all programs?

    - by Nilbert
    I have a simple small question which someone who knows will be able to answer easily, I searched google but couldn't find the answer. There are many programs running at once on a computer, and my question is: when a program loads a DLL, does it actually load the DLL file or does it find the memory in which the DLL is already loaded? For example, is ws2_32.dll (winsock 2) loaded for every program that uses winsock, or is it loaded once and all programs that use it use the same memory addresses to call the functions?
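
    As an illustrative aside (a sketch, with the caveat that the details are OS-level behavior rather than anything from the original question): Windows maps a DLL's read-only code pages once and shares them across processes, while each process gets its own copies of writable data pages. You can see where modules are mapped from managed code; system DLLs typically report the same base address in every process precisely because the image is shared.

        using System;
        using System.Diagnostics;

        class ModuleList
        {
            static void Main()
            {
                // Lists every DLL mapped into this process and its base address.
                foreach (ProcessModule m in Process.GetCurrentProcess().Modules)
                    Console.WriteLine("{0,-24} base 0x{1:X}", m.ModuleName, m.BaseAddress.ToInt64());
            }
        }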

  • Database error when deleting an entry in my Rails app

    - by Danny McClelland
    Hi everyone... again! I have almost everything in my Rails app working, with the exception of destroying entries. I can destroy entries for companies but not for kases and people. The following error shows when trying to do so:

        SQLite3::SQLException: no such column: kases_people.kase_id:
        SELECT * FROM "kases" INNER JOIN "kases_people" ON "kases".id = "kases_people".kase_id WHERE ("kases_people".person_id = 5)

    I suspect this is an error with the party model's has_many :through associations, which I don't fully understand. You can find an up-to-date version of the app at www.github.com/dannyweb/surveycontrol Thanks, Danny

  • Deploying a VB.NET app with a database on a server

    - by vbNewbie
    I have an application that accesses a SQL Server 2008 database. The server and database are stored on my local hard drive, and I would like to learn to scale this up to having multiple users in our office access the application, which I will deploy on a server. For now the database will stay on my PC until I am ready to put it on a dedicated server, but how do I allow the deployed application to access my database in the meantime? Here is my current connection string:

        <add key="ConnectionString" value="Data Source=.;Initial Catalog=SearchEngine;User ID=sa;Password=pss;Trusted_Connection=False;" />

    Now I know how to deploy a general VB.NET application, but what I don't know is what to do with the database. Please help with any advice.

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (the first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts:

        System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.Xml.BufferBuilder.AddBuffer()
           at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count)
           at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count)
           at System.Xml.XmlTextReaderImpl.ParseText()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlTextReader.Read()
           at System.Xml.XmlReader.ReadElementString()
           at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse()
           at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)

    I've read around and it seems to be because the Large Object Heap is getting too fragmented, so even preceding the call with a quick check via StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much, so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself:

    - Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this will happen. Some of the times it fails, it succeeded at that stage previously.
    - Transfer smaller amounts? Not an option; this is a third-party web service over which I have no control (or at least it would take a long time to resolve, and in the meantime I still have a problem).
    - Can you do something to the LOH to make it less likely to fail? ... now this is the most fruitful course. It's a 32-bit process (it has to be for various political, technical and boring reasons), but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures).
    - Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory.

    Question is: any advice or suggestions for things to try?
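
    For what it's worth, the usual mitigation for this class of problem (a hedged sketch; it assumes you can get at the raw response stream rather than letting the generated proxy build one giant string) is to accumulate the payload in fixed-size chunks, each below the roughly 85,000-byte size at which allocations land on the LOH, so no single contiguous block is ever requested:

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class ChunkedReader
        {
            // 64 KB chunks stay comfortably below the ~85,000-byte LOH threshold.
            const int ChunkSize = 64 * 1024;

            public static List<byte[]> ReadAllChunks(Stream source)
            {
                var chunks = new List<byte[]>();
                while (true)
                {
                    var chunk = new byte[ChunkSize];
                    int filled = 0;
                    while (filled < chunk.Length)
                    {
                        int n = source.Read(chunk, filled, chunk.Length - filled);
                        if (n == 0) break; // end of stream
                        filled += n;
                    }
                    if (filled == 0) break;
                    if (filled < chunk.Length)
                    {
                        var last = new byte[filled]; // trim the final partial chunk
                        Buffer.BlockCopy(chunk, 0, last, 0, filled);
                        chunks.Add(last);
                        break;
                    }
                    chunks.Add(chunk);
                }
                return chunks;
            }
        }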

  • secure data transport between web server and database server

    - by atypicalgeek
    I asked this question on Stack Overflow and it was suggested to try here, so here goes... I'm planning on provisioning a web server and a database server in a server farm environment. They will be in the same network but not in the same domain; both are Windows Server 2008 and the database server runs SQL Server 2008. My question is: what is the best way to secure data in transit between the servers? I've looked into IPsec and SSL but am not sure how to go about implementing either.
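
    As a hedged illustration of the SSL option only (placeholder names throughout, and it assumes a certificate is installed and enabled on the SQL Server machine): the client side amounts to turning on Encrypt in the connection string.

        using System.Data.SqlClient;

        class EncryptedConnectionDemo
        {
            static void Main()
            {
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "dbserver",        // placeholder database server name
                    InitialCatalog = "AppDb",       // placeholder database
                    UserID = "webapp",
                    Password = "secret",
                    Encrypt = true,                 // request SSL encryption on the wire
                    TrustServerCertificate = false  // and validate the server's certificate
                };

                using (var connection = new SqlConnection(builder.ConnectionString))
                {
                    connection.Open();
                    // Queries now travel encrypted between the web and database servers.
                }
            }
        }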

  • How efficient is PHP's substr?

    - by zildjohn01
    I'm writing a parser in PHP which must be able to handle large in-memory strings, so this is a somewhat important issue. (i.e., please don't flame me for "premature optimization") How does the substr function work? Does it make a second copy of the string data in memory, or does it reference the original? Should I worry about calling, for example, $str = substr($str, 1); in a loop?

  • NSData release is not reclaiming memory

    - by ctpenrose
    iPhone OS 3.2. I use NSKeyedUnarchiver's unarchiveObjectWithFile: to load a custom object that contains a single large NSData and another, much smaller object. The dealloc method in my custom object gets called, and the NSData object is released; its retainCount == 1 just before. Physical memory does not decrement by any amount, let alone a fraction of the NSData's size, and with repetition memory warnings are reliably generated: I have tested until I actually received level 2 warnings. =(

        NSString *archivePath = [[[NSBundle mainBundle] pathForResource:@"lingering" ofType:@"data"] retain];
        lingeringDataContainer = [[NSKeyedUnarchiver unarchiveObjectWithFile:archivePath] retain];
        [archivePath release];
        [lingeringDataContainer release];

    and now the dealloc....

        - (void) dealloc {
            [releasingObject release];
            [lingeringData release];
            [super dealloc];
        }

    Before release:

        (gdb) p (int) [(NSData *) lingeringData retainCount]
        $1 = 1

    After:

        (gdb) p (int) [(NSData *) lingeringData retainCount]
        Target does not respond to this message selector.

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service speed is obviously of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations?

    Edits:

    @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.

    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.

    @Jason Day: My original implementation of this was to use a trie, but the memory bloat with that was actually worse than the sorted set due to the large number of object references needed. I'll read up on ternary search trees to see if they could be of use.
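
    For reference, here is a hedged C# analogue of the sorted-set-plus-tailset approach the question describes (Java's TreeSet.tailSet); the data and class names are illustrative, and it shares the same memory-scaling limits the poster is running into.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class PrefixIndex
        {
            readonly SortedSet<string> _concepts;

            public PrefixIndex(IEnumerable<string> concepts)
            {
                _concepts = new SortedSet<string>(concepts, StringComparer.OrdinalIgnoreCase);
            }

            public IEnumerable<string> Complete(string prefix, int maxResults)
            {
                // GetViewBetween positions in O(log n); '\uffff' makes an upper bound
                // just past every string that starts with the prefix (ordinal ordering).
                return _concepts.GetViewBetween(prefix, prefix + '\uffff')
                                .TakeWhile(s => s.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                                .Take(maxResults);
            }
        }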

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1 TB hard drive and it filled the drive; there is only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The hard disk this DB is on has no files on it other than the MDF and LDF, so it's impossible to free up any space on that drive. The main hard disk is smaller but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records because the DB is in a failed mode (due to no disk space) and it doesn't respond to most commands. The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem, but everything I've found first involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated. I get the following log when trying to switch the DB to online mode:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:

    - DBCC SHRINKFILE - can't be run because doing a 'use DBNAME' fails
    - Detaching the DB and then changing the location of the MDF/LDF files - this fails because the DB is in an offline mode, so you can't run detach.

    I'm at a loss about what else to try. Thanks.

  • Out-Of-Memory while doing Core Data migration

    - by Kamchatka
    Hello, I'm migrating a Core Data model between two versions of an application. I was storing binary data as blobs in the previous version, and I want to take the data out of the blobs for performance. My issue is that during the migration Core Data seems to load everything into memory, which leads to low-memory warnings and then to my app being killed. Apple's documentation suggests the following: http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmCustomizingTheProcess.html#//apple_ref/doc/uid/TP40005510-SW9 However, it seems to rely on different mappings being applied to the large objects. In my case all the objects are basically the same, and the same mapping has to be applied to each of them, so I don't see how I could apply their technique. How should I handle a migration with very large objects?

  • [MFC] Combining 2 memory DCs?

    - by OverTheEdge
    I'm writing a control where there's a lot of custom drawing going on. Because of this I need to trim down the number of "screen writes". Currently there is only one memory DC, used to write to the screen so as to avoid flicker when the control is redrawn. I want to know if it is possible to use 2 or more memory DCs to write updates independently and then BitBlt them to the screen. This way the need to render unchanged parts of the screen is minimized. Thanks in advance, the_Saint

  • Do I need to insert a fake row in the database?

    - by Ankit Rathod
    Hello, I have a few tables, for example:

        Users       Books    UsersBookPurchase
        --------    ------   -----------------
        UID         BookId   UserId
        UName       Name     BookId
        Password    Price
        Email

    This is fine. I have my own login system, but I am also using third parties to validate users, like OpenID or Facebook authentication. My question is: if the user is able to log in successfully using OpenID or Facebook authentication, what steps do I need to take? Do I have to insert a fake row in the Users table? If I do not insert one, how will integrity be maintained - I mean, what UserId should I insert into UsersBookPurchase when a person who logged in using Facebook authentication makes a purchase, given that UserId is a reference key from the Users table? Please give me a high-level overview of what I need to do, because this is a fairly common scenario. Thanks in advance :)

  • Database Design: A proper table design for a large number of column values

    - by Jake
    I wish to perform an experiment many different times. After every trial, I am left with a "large" set of output statistics -- let's say, 1000. I would like to store the outputs of my experiments in a table, but what's the best way...?

    Option 1: Have a table with 1000 columns. Seems like a bad idea. What if the number of statistics one day exceeds the maximum number of columns?

    Option 2: Have a table with three columns. Let's say, ID, StatisticType, and StatisticValue. That way, you can have as many statistics as you want. However, reading a single experiment's statistics becomes more complicated. Moreover, what if different statistics have different data types?

    Any suggestions?

  • Invalid Memory Access for JavaFX ScrollBar on Snow Leopard

    - by Mike Caron
    I created the following JavaFX script which, when run, generates an invalid memory access on Snow Leopard. What is it about javafx.scene.control.ScrollBar that is causing a memory failure?

        Stage {
            title: "Scroll View"
            scene: Scene {
                content: [
                    ScrollBar {
                        min: 0
                        max: 100
                        value: 0
                        blockIncrement: 10
                        vertical: false
                    }
                ]
            }
            resizable: false
        }

    I'm using whatever JavaFX (at least 1.2) comes with NetBeans 6.8:

        Product Version: NetBeans IDE 6.8 (Build 200912041610)
        Java: 1.6.0_17; Java HotSpot(TM) 64-Bit Server VM 14.3-b01-101
        System: Mac OS X version 10.6.2 running on x86_64; MacRoman; en_US (nb)

  • ADO.NET: Multiple simultaneous reads from an open database

    - by Deverill
    Answer not needed - my logic was wrong and this question is invalid. Charles helped me see where I went off track. Thanks.
    I have a utility that moves data from one source to another. In the process of writing a record I check to see if it exists and do an update/insert as necessary. The difficulty I have is that as I'm writing the main record info, there is a second table for "custom data" that I have to check for and update/insert as well. Example: I may be loading a pencil sharpener that may or may not exist. While I'm writing it into the destination, it has characteristics such as style, color, etc., and each of them may or may not exist. As written, I seem to need two DataReaders open: one for the sharpener and one to check for and update the color. I am new to ADO.NET, but not to programming, and it's more complicated than I listed, but for sanity's sake I didn't include all the details. My question is: what am I missing? You can't have 2 readers open at the same time on a connection, yet I can't close the first if I'm stepping through all products. It seems inefficient to have 2 connections, readers, etc. for this. Is there a feature of ADO.NET DBs that I'm missing? How would you do it? Thanks!
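
    As an aside, not part of the original thread: with SQL Server 2005 and later, the premise "you can't have 2 readers open at the same time on a connection" has a built-in escape hatch, MARS (Multiple Active Result Sets). A hedged sketch with made-up table names:

        using System.Data.SqlClient;

        class MarsDemo
        {
            static void Main()
            {
                // MultipleActiveResultSets=True is what allows the nested reader below.
                const string cs = "Data Source=.;Initial Catalog=Shop;" +
                                  "Integrated Security=True;MultipleActiveResultSets=True";

                using (var connection = new SqlConnection(cs))
                {
                    connection.Open();
                    using (var products = new SqlCommand("select Id from Products", connection).ExecuteReader())
                    {
                        while (products.Read())
                        {
                            // A second reader on the SAME open connection:
                            using (var colorCommand = new SqlCommand(
                                "select Color from ProductColors where ProductId = @id", connection))
                            {
                                colorCommand.Parameters.AddWithValue("@id", products.GetInt32(0));
                                using (var colors = colorCommand.ExecuteReader())
                                {
                                    while (colors.Read()) { /* update/insert logic would go here */ }
                                }
                            }
                        }
                    }
                }
            }
        }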

  • Upload a database backup from MySQL to Amazon S3 or Glacier without creating a local file

    - by Rubem Azenha
    Is there a tool that makes it possible to back up a MySQL database to Amazon S3 or Amazon Glacier without having to create a local file with the database contents? Something like this:

        mysqldump -u root -ppass -h host --all-databases | magical-s3-tool s3-bucket backup-yyyy-mm-dd.sql

    This magical tool would use the piped data and transfer the backup data directly to S3, without creating a local file.
