Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Calling addSubview again causes slowdown

    - by Tom
    Hi guys, I am writing a little music game for the iPhone. I am almost done; this is the only issue keeping me from rolling it out, so any help is much appreciated.

    This is what I do: in my appDelegate I add my menu-view screen to the window. The menu-view screen acts as a container and controls which view gets presented to the user. On the menu-view screen there are 4 buttons (new game, options, FAQ, high score). When the user taps a button, something like this happens:

        if (self.gameViewController == nil) {
            GameViewController *viewController = [[GameViewController alloc] initWithNibName:@"GameViewController" bundle:nil];
            self.gameViewController = viewController;
            [viewController release];
        }
        [self.view addSubview:self.gameViewController.view];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(handleSwitchViewNotificationFromGameView:)
                                                     name:@"SwitchView"
                                                   object:gameViewController];

    When the user returns to the menu, this piece of code gets executed:

        [[NSNotificationCenter defaultCenter] removeObserver:self];
        [self.gameViewController viewWillDisappear:YES];
        [self.gameViewController.view removeFromSuperview];

    This works fine for all screens except the game screen (which is the only one with heaps of user interaction): the responsiveness of the iPhone when playing tones gets really slow. The performance is fine when I display the game view for the first time; it starts getting slower as soon as I add it to the menu view's container subviews again with addSubview (basically, when opening up a new game). Any ideas what causes this, or how to get around it? Thanks heaps. Best regards, Tom

  • .NET IHttpHandler streaming SQL binary data

    - by Yisman
    Hello everybody. I am trying to implement an IHttpHandler for streaming files. Files may be tiny thumbnails or gigantic movies, and the binaries are stored in SQL Server. I have looked at a lot of code online, but something does not make sense: isn't streaming supposed to read the data piece by piece and move it over the line? Most of the code seems to first read the whole field from SQL Server into memory and only then use streaming for the output writing. Wouldn't it be more efficient to actually stream from the database directly to HTTP, byte by byte (or in buffered chunks)? Here is my code so far, but I can't figure out the correct combination of the data reader mode, the stream object, and the writing system:

        Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            context.Response.BufferOutput = False
            Dim FileField = safeparam(context.Request.QueryString("FileField"))
            Dim FileTable = safeparam(context.Request.QueryString("FileTable"))
            Dim KeyField = safeparam(context.Request.QueryString("KeyField"))
            Dim FileKey = safeparam(context.Request.QueryString("FileKey"))
            Using connection As New SqlConnection(ConfigurationManager.ConnectionStrings("Main").ConnectionString)
                Using command As New SqlCommand("SELECT " & FileField & "Bytes," & FileField & "Type FROM " & FileTable & " WHERE " & KeyField & "=" & FileKey, connection)
                    command.CommandType = Data.CommandType.Text
                End Using
            End Using
        End Sub

    Please be aware that this SQL command also returns the file extension (pdf, jpg, doc, ...) in the second field of the query. Thank you all very much.
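    For what it's worth, here is a hedged C# sketch of the chunked approach being asked about: CommandBehavior.SequentialAccess is what keeps ADO.NET from buffering the whole BLOB. The table and column names are illustrative assumptions, the type column is selected first so the Content-Type header can be set before the body starts streaming (SequentialAccess requires reading columns in order), and a parameterized query stands in for the string concatenation above:

        // Hypothetical handler sketch; table/column names and buffer size are assumptions.
        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web;

        public class FileStreamHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.BufferOutput = false;

                using (var connection = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
                // Parameterized query instead of concatenating user input.
                using (var command = new SqlCommand(
                    "SELECT FileType, FileBytes FROM Files WHERE Id = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", context.Request.QueryString["FileKey"]);
                    connection.Open();

                    // SequentialAccess keeps the reader from buffering the whole row,
                    // which is what makes true streaming possible.
                    using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
                    {
                        if (!reader.Read()) { context.Response.StatusCode = 404; return; }

                        context.Response.ContentType = MapContentType(reader.GetString(0));

                        var buffer = new byte[8192]; // chunk size is a tuning assumption
                        long offset = 0, read;
                        while ((read = reader.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
                        {
                            context.Response.OutputStream.Write(buffer, 0, (int)read);
                            offset += read;
                        }
                    }
                }
            }

            // Placeholder: map the stored extension ("pdf", "jpg", ...) to a MIME type.
            private static string MapContentType(string extension)
            {
                return extension == "pdf" ? "application/pdf" : "application/octet-stream";
            }
        }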

  • Displaying the tree path of a record in SQL Server 2005

    - by jskiles1
    An example of my tree table is ([id] is an identity):

        [id]  [parent_id]  [path]
        1     NULL         1
        2     1            1-2
        3     1            1-3
        4     3            1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. The "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

    1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity.
    2. INSTEAD OF INSERT TRIGGER - does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY().
    3. STORED PROCEDURE - requires all inserts into this table to go through the stored procedure implementing the trigger logic.
    4. VIEW - requires building the path in the view.

    Options 1 and 2 seem annoying if massive amounts of data are entered at once. Option 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. Options 1, 2, and 3 require maintaining a path column on the table. Option 4 removes all of those limitations, but requires the view to perform the path logic and requires use of the view whenever a path is to be displayed.

    I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "premature optimization" is the buzzword of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a proactive goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and appalling to disciplined computer science.

    But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other people's opinions on the topic.
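    To make the trade-off in question concrete, here is a small illustrative sketch (my own, in C#; not from the original post) of the same task written at O(n^2) and at O(n). For a handful of items either is fine; the disagreement is about whether thinking about this up front counts as "premature":

        // Illustrative only: the same duplicate check at two complexities.
        using System.Collections.Generic;

        static class DuplicateCheck
        {
            // O(n^2): fine for tiny inputs, pathological for large ones.
            public static bool HasDuplicateQuadratic(int[] items)
            {
                for (int i = 0; i < items.Length; i++)
                    for (int j = i + 1; j < items.Length; j++)
                        if (items[i] == items[j]) return true;
                return false;
            }

            // O(n): one pass with a hash set.
            public static bool HasDuplicateLinear(int[] items)
            {
                var seen = new HashSet<int>();
                foreach (var item in items)
                    if (!seen.Add(item)) return true;
                return false;
            }
        }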

  • Minimal deployment of CouchDB on Windows

    - by MartinStettner
    Hi, I'd like to use CouchDB for a client-only application on Windows (the document-oriented structure and the synchronization features would be perfect for me). There is a Windows installer package, but the installer itself is about 45 MB, and once installed it takes more than 100 MB on my HD. This is far too much for my (relatively small) application.

    I noticed that there are a lot of "src" directories in the couchdb/lib subdirs. I've been experimenting with removing some of them and it didn't seem to break the system. Now I'm wondering what the "minimal" set of files (preferably binary-only) would be in order to run a local CouchDB server. Are there already any efforts to create such a deployment-friendly installer? Or could anyone give some (even very general) hints on how to create one? How much disk space would such an installation minimally require?

    Needless to say, I'm not at all familiar with either the CouchDB internals or the Erlang system :). But perhaps I could figure it out given some direction (or I could stop trying if someone told me that this is impossible or doesn't make sense at all ...). Thanks anyway!

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer who primarily writes in PHP, [X]HTML, CSS, JavaScript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music.

    I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow.

    I usually sketch a lot, for explaining things, developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but I am spoiled by the two 19" displays at my home desktop and my job's workstation.

  • Python: Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be left in an inconsistent state.

    My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C Interrupted; stopping... // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So I decided to investigate other ways of drawing, and have come upon OpenGL.

    I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly uses only very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern -- not translated or rotated or skewed or stretched -- with the "on" bits of the pattern showing up in the specified color, and the "off" bits left displaying whatever had been drawn under the area I am now drawing on.

    Obviously I could do all the calculations myself: instead of drawing a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels that actually need to be drawn and then draw them directly. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?

    I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern based upon the vertices of the polygon, whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute-force calculation of exactly the pixels/lines that need to be drawn? Thanks in advance for any help.

  • Combining streams: web application

    - by Surendra J
    This question deals mainly with streams in a web application in .NET. In my web application I will display the following, with a checkbox next to each file and a button underneath:

        bottle.doc
        sheet.xls
        presentation.ppt
        stackof.jpg
        Button

    Suppose a user selects the four files and clicks the button. Then I instantiate the classes I have already written for each file type to convert it into PDF and return it. My problem: the classes are able to read the data from a URL and convert it into PDF, but I don't know how to return the streams and merge them.

        string url = @"url";
        // Prepare the web page we will be asking for
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.ContentType = "application/mspowerpoint";
        request.UserAgent = "Mozilla/4.0+(compatible;+MSIE+5.01;+Windows+NT+5.0";
        // Execute the request
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        // We will read data via the response stream
        Stream resStream = response.GetResponseStream();
        // Write content into the MemoryStream
        BinaryReader resReader = new BinaryReader(resStream);
        MemoryStream presentationStream = new MemoryStream(resReader.ReadBytes((int)response.ContentLength));
        // Convert the presentation stream into PDF and save it to local disk,
        // but I would like to return the stream again.

    How can I achieve this? Any ideas are welcome.
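    A hedged sketch of one way to structure this (all names here are my own assumptions, not the poster's actual API): have each converter return its PDF as a rewound Stream, collect them, then hand the collection to a merge step. Note that PDFs cannot simply be concatenated byte-for-byte; the Merge method below is an honest placeholder for a real PDF library such as iTextSharp.

        using System.Collections.Generic;
        using System.IO;

        public interface IPdfConverter
        {
            // Each converter (ppt, xls, doc, jpg, ...) yields its result as a stream.
            Stream ConvertToPdf(string sourceUrl);
        }

        public static class PdfPipeline
        {
            // Collect the individual PDF streams for the files the user checked.
            public static List<Stream> ConvertAll(IEnumerable<string> urls, IPdfConverter converter)
            {
                var results = new List<Stream>();
                foreach (var url in urls)
                {
                    var pdf = converter.ConvertToPdf(url);
                    pdf.Position = 0; // rewind (assumes a seekable stream, e.g. MemoryStream)
                    results.Add(pdf);
                }
                return results;
            }

            // Placeholder: real PDF merging needs a PDF library; raw byte
            // concatenation does not produce a valid PDF document.
            public static Stream Merge(IEnumerable<Stream> pdfStreams)
            {
                throw new System.NotImplementedException("Use a PDF library here.");
            }
        }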

  • gcc, strict-aliasing, and horror stories

    - by Joseph Quinsey
    In http://stackoverflow.com/questions/2906365/gcc-strict-aliasing-and-casting-through-a-union I asked whether anyone had encountered problems with union punning through pointers. So far, the answer seems to be No.

    This question is broader: do you have any horror stories about gcc and strict-aliasing?

    Background: Quoting from AndreyT's answer in http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041: "Strict aliasing rules are rooted in parts of the standard that were present in C and C++ since the beginning of [standardized] times. The clause that prohibits accessing object of one type through a lvalue of another type is present in C89/90 (6.3) as well as in C++98 (3.10/15). ... It is just that not all compilers wanted (or dared) to enforce it or rely on it."

    Well, gcc is now daring to do so, with its -fstrict-aliasing switch. And this has caused some problems. See, for example, the excellent article http://davmac.wordpress.com/2009/10/ about a MySQL bug, and the equally excellent discussion in http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html.

    Some other less-relevant links:

    - http://stackoverflow.com/questions/1225741/performance-impact-of-fno-strict-aliasing
    - http://stackoverflow.com/questions/754929/strict-aliasing
    - http://stackoverflow.com/questions/262379/when-is-char-safe-for-strict-pointer-aliasing
    - http://stackoverflow.com/questions/725138/how-to-detect-strict-aliasing-at-compile-time

    So to repeat: do you have a horror story of your own? Problems not indicated by -Wstrict-aliasing would, of course, be preferred. And other C compilers are also welcome.

  • Interface for reading variable length files with header and footer.

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte - 3 GB) and a footer (4 bytes). The header contains information about the body, and the footer is only a simple CRC32 value of the body.

    I use Java, so my idea was to extend the InputStream class and add a constructor such as public MyInStream(InputStream in), where I immediately read the header and then direct the overridden read()s at the body. The problem is, I can't give the user of the class the CRC32 value until the whole body has been read.

    Because the file can be 3 GB large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is, because the InputStream doesn't have to be a file -- it could be a socket.

    Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
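    The question is about Java, but the decorator idea it sketches is language-neutral. Below is a hedged C# sketch of the same shape (class and callback names are my own): a wrapping stream that reads the fixed-size header up front and feeds every subsequent byte to a checksum callback, so the CRC is available once the body has been consumed. Splitting the 4-byte footer off a stream of unknown length additionally requires holding back the last 4 bytes read; that part is omitted here for brevity.

        using System;
        using System.IO;

        // Hypothetical decorator exposing header/body framing over any Stream;
        // a Java version would extend InputStream the same way.
        public sealed class FramedStream : Stream
        {
            private readonly Stream inner;
            private readonly Action<byte[], int, int> onBodyBytes; // e.g. a CRC32 update

            public byte[] Header { get; private set; }

            public FramedStream(Stream inner, int headerLength, Action<byte[], int, int> onBodyBytes)
            {
                this.inner = inner;
                this.onBodyBytes = onBodyBytes;
                Header = new byte[headerLength];
                int total = 0;
                while (total < headerLength) // read the fixed-size header up front
                {
                    int n = inner.Read(Header, total, headerLength - total);
                    if (n == 0) throw new EndOfStreamException("Truncated header.");
                    total += n;
                }
            }

            public override int Read(byte[] buffer, int offset, int count)
            {
                int n = inner.Read(buffer, offset, count);
                if (n > 0) onBodyBytes(buffer, offset, n); // checksum as the body flows by
                return n;
            }

            // Boilerplate for the remaining abstract Stream members.
            public override bool CanRead { get { return true; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return false; } }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override void Flush() { }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
            public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
        }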

  • How can I see which shared folders my program has access to?

    - by Kasper Hansen
    My program needs to read and write folders on other machines that might be in another domain, so I used System.Runtime.InteropServices to add connections to shared folders. This worked fine when it was hard-coded in the main method of my Windows service, but since then something went wrong, and I don't know if it is a coding error or a configuration error.

    What is the scope of a shared-folder connection? If a thread in my program adds one, can the entire local machine see it? Is there a way to view which shared folders have been added? Or is there a way to see when a folder is added?

        // Assumes these aliases at the top of the file, which the struct below needs:
        // using LPWSTR = System.String;
        // using DWORD = System.UInt32;

        [DllImport("NetApi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        internal static extern System.UInt32 NetUseAdd(string UncServerName, int Level,
            ref USE_INFO_2 Buf, out uint ParmError);

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        internal struct USE_INFO_2
        {
            internal LPWSTR ui2_local;
            internal LPWSTR ui2_remote;
            internal LPWSTR ui2_password;
            internal DWORD ui2_status;
            internal DWORD ui2_asg_type;
            internal DWORD ui2_refcount;
            internal DWORD ui2_usecount;
            internal LPWSTR ui2_username;
            internal LPWSTR ui2_domainname;
        }

        private void AddSharedFolder(string name, string domain, string username, string password)
        {
            if (name == null || domain == null || username == null || password == null)
                return;

            USE_INFO_2 useInfo = new USE_INFO_2();
            useInfo.ui2_remote = name;
            useInfo.ui2_password = password;
            useInfo.ui2_asg_type = 0; // disk drive
            useInfo.ui2_usecount = 1;
            useInfo.ui2_username = username;
            useInfo.ui2_domainname = domain;

            uint paramErrorIndex;
            uint returnCode = NetUseAdd(String.Empty, 2, ref useInfo, out paramErrorIndex);
            if (returnCode != 0)
            {
                throw new Win32Exception((int)returnCode);
            }
        }
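    For reference, a hypothetical usage sketch (the UNC path, domain, and credentials are placeholders, and NetworkConnector stands in for whatever class hosts AddSharedFolder above):

        // Placeholders throughout; not the poster's actual values.
        class Program
        {
            static void Main()
            {
                var connector = new NetworkConnector(); // assumed host of AddSharedFolder
                connector.AddSharedFolder(@"\\othermachine\share", "OTHERDOMAIN", "user", "password");
                // Running "net use" at a command prompt in the same logon session
                // should now list the connection; NetUseAdd connections are scoped
                // to the logon session, not to the whole machine.
            }
        }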

  • What is the relation between programming and mathematics?

    - by Math Grad
    Programmers seem to think that their work is quite mathematical. I understand this when you try to optimize something for performance, or find the most efficient algorithm, etc. But it seems patently false when you look at a billing application for a shop, or systems software riddled with I/O calls. So what is it exactly? Is computation, and the associated programming, really mathematical? Here I have particularly in mind the words of the philosopher Schopenhauer:

        That arithmetic is the basest of all mental activities is proved by the fact that it is the only one that can be accomplished by means of a machine. Take, for instance, the reckoning machines that are so commonly used in England at the present time, and solely for the sake of convenience. But all analysis finitorum et infinitorum is fundamentally based on calculation. Therefore we may gauge the "profound sense of the mathematician," of whom Lichtenberg has made fun, in that he says: "These so-called professors of mathematics have taken advantage of the ingenuousness of other people, have attained the credit of possessing profound sense, which strongly resembles the theologians' profound sense of their own holiness."

    I lifted the above quote from here. It seems that programmers are doing precisely the sort of mechanized, base mental activity the grand old man is contemptuous about. So what exactly is the deal? Is programming really the "good" kind of mathematics, or just the baser type, or altogether something else, just meant for business and not to be confused with a pure discipline?

  • Output reformatted text within a file included in a JSP

    - by javanix
    I have a few HTML files that I'd like to include via include tags in my webapp. Within some of the files, I have pseudo-dynamic code -- specially formatted bits of text that, at runtime, I'd like to be resolved to their respective bits of data in a MySQL table. For instance, the HTML file might include a line that says:

        Welcome, [username].

    I want this resolved to (via a logged-in user's data):

        Welcome, [email protected].

    This would be simple to do in a JSP file, but the requirements dictate that the files will be created by people who know basic HTML, but not JSP. Simple text tags like this should be easy enough for me to explain to them, however. I have the code set up to do resolutions like that for strings, but can anyone think of a way to do it across files? I don't actually need to modify the file on disk -- just load the content, modify it, and output it within the containing JSP file. I've been playing around with trying to load the files into strings via Apache Commons' readFileToString, but I can't figure out how to load files from a specific folder within the webapp's content directory without hardcoding the path and having to worry about it breaking if I deploy to a different system in the future.

  • How to perform a binary search on IList<T>?

    - by Daniel Brückner
    A simple question: given an IList<T>, how do you perform a binary search without writing the method yourself and without copying the data to a type with built-in binary search support? My current status is the following:

    - List<T>.BinarySearch() is not a member of IList<T>
    - There is no equivalent of the ArrayList.Adapter() method for List<T>
    - IList<T> does not inherit from IList, hence using ArrayList.Adapter() is not possible

    I tend to believe it is not possible with built-in methods, but I cannot believe that such a basic method is missing from the BCL/FCL. If it is not possible, who can give the shortest, fastest, smartest, or most beautiful binary search implementation for IList<T>?

    UPDATE: We all know that a list must be sorted before using binary search, hence you can assume that it is. But I assume (though I did not verify) it is the same problem with sort -- how do you sort an IList<T>?

    CONCLUSION: There seems to be no built-in binary search for IList<T>. One can use the First() and OrderBy() LINQ methods to search and sort, but that will likely have a performance hit. Implementing it yourself (as an extension method) seems the best you can do.
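    Following the conclusion above, here is a minimal sketch of such an extension method (my own illustration; it assumes the list is already sorted consistently with the comparer, and mirrors the return convention of List<T>.BinarySearch):

        using System;
        using System.Collections.Generic;

        public static class ListExtensions
        {
            // Classic binary search over any IList<T>; returns the index of the item,
            // or the bitwise complement of the insertion point, matching the
            // convention of List<T>.BinarySearch.
            public static int BinarySearch<T>(this IList<T> list, T item, IComparer<T> comparer = null)
            {
                if (list == null) throw new ArgumentNullException("list");
                comparer = comparer ?? Comparer<T>.Default;

                int lo = 0, hi = list.Count - 1;
                while (lo <= hi)
                {
                    int mid = lo + ((hi - lo) >> 1); // avoids overflow of (lo + hi)
                    int cmp = comparer.Compare(list[mid], item);
                    if (cmp == 0) return mid;
                    if (cmp < 0) lo = mid + 1;
                    else hi = mid - 1;
                }
                return ~lo; // not found: complement of the first index with a larger element
            }
        }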

  • How can we serialize a class that is not a custom class of our own?

    - by Doug
    I need to look at the properties of an object, and I cannot instantiate this object in the proper state on my dev machine. I need my client to run some code on her machine, serialize the object in question to disk, and then I can analyze the file. Here is the class I want to serialize:

        System.Security.AccessControl.RegistrySecurity

    Here is my code:

        Private Sub SerializeRegSecurity(ByVal regKey As RegistryKey)
            Try
                Dim regSecurity As System.Security.AccessControl.RegistrySecurity = regKey.GetAccessControl()
                Dim oXS As XmlSerializer = New XmlSerializer(GetType(System.Security.AccessControl.RegistrySecurity))
                Dim oStmW As StreamWriter
                Dim regDebugFilePath As String = Path.Combine(My.Computer.FileSystem.SpecialDirectories.Desktop, "RegDebugFile.xml")

                'Serialize object to XML and write it to XML file
                oStmW = New StreamWriter(regDebugFilePath)
                oXS.Serialize(oStmW, regSecurity)
                oStmW.Close()
            Catch ex As Exception
                Console.WriteLine(ex.ToString)
            End Try
        End Sub

    And here's what I end up with in my XML file:

        <?xml version="1.0" encoding="utf-8"?>

    Any ideas on how to accomplish what I am trying to do? How can we serialize a class that is not a custom class of our own? Thanks for ANY help, even an alternate method.
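    For what it's worth, XmlSerializer only serializes public read/write properties, and RegistrySecurity exposes none, which would explain the empty file. One alternate method (my suggestion, not from the thread): persist the security descriptor in SDDL form, which the class supports directly. A hedged C# sketch:

        using System.IO;
        using System.Security.AccessControl;
        using Microsoft.Win32;

        static class RegSecurityDump
        {
            // Dump the ACL as an SDDL string instead of XML-serializing the object.
            // Audit rules (SACL) need extra privileges and are omitted here.
            public static void Save(RegistryKey key, string path)
            {
                RegistrySecurity security = key.GetAccessControl();
                string sddl = security.GetSecurityDescriptorSddlForm(
                    AccessControlSections.Access |
                    AccessControlSections.Owner |
                    AccessControlSections.Group);
                File.WriteAllText(path, sddl);
            }

            // On the dev machine, rehydrate an equivalent object for analysis.
            public static RegistrySecurity Load(string path)
            {
                var security = new RegistrySecurity();
                security.SetSecurityDescriptorSddlForm(File.ReadAllText(path));
                return security;
            }
        }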

  • Collapsing a data frame by selecting one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column -- in other words, the first row from each group. For example, I'd like to convert this:

        > d = data.frame(x=c(1,1,2,4), y=c(10,11,12,13), z=c(20,19,18,17))
        > d
          x  y  z
        1 1 10 20
        2 1 11 19
        3 2 12 18
        4 4 13 17

    into this:

          x  y  z
        1 1 11 19
        2 2 12 18
        3 4 13 17

    I'm using aggregate to do this currently, but the performance is unacceptable with more data:

        > d.ordered = d[order(-d$y),]
        > aggregate(d.ordered, by=list(key=d.ordered$x), FUN=function(x){x[1]})

    I've tried split/unsplit with the same function argument as above, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's lengths vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?

  • Ajax-heavy JS apps using excessive amounts of memory over time

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approximately 40 KB of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table each time because the data is almost always new. I am attaching a few events to the table, about 5 per line, with 30 lines in the table.

    I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's cleanup functions go in and attempt to detach all events on the elements being overwritten. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var. I have checked for circular references and for attached events that are never cleared, but never REALLY dug into it.

    I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet. To give an idea of how much memory this is using: after about 4 hours, it is using about 420,000 K in Chrome, and much more in Firefox or IE. Thanks!

  • Button OnClick event (which is in the codebehind) doesn't get triggered in MVC 2

    - by rksprst
    I had an MVC 1.0 web application in VS 2008; I just upgraded the project to VS 2010, which automatically upgraded MVC to 2.0. I have a bunch of view pages with codebehind files that were added manually. The project worked fine before the upgrade, but now the OnClick events don't get triggered. I.e., I have an asp:Button with an OnClick event that points to a method in the codebehind; when you click the button, the OnClick event doesn't fire. In fact, when you look at the Page variable, IsPostBack is false.

    This is really bizarre and I'm wondering if anyone knows what happened and how to fix it. I'm thinking it has something to do with the changes in MVC 2.0, but I'm not sure. Any help is really appreciated; I've been trying to figure this out for a while. (Deleting the codebehinds and moving that logic to the controllers is not really an option since there are so many pages, and moving back to VS 2008 is a last resort as I want to make use of some of the VS 2010 features like performance testing.)

  • Passing groups of properties from one class to another

    - by insanepaul
    I have a group of 13 properties in a class. I created a struct for these properties and passed it to another class. I now need to add another 10 groups of these 13 properties, so that's 130 properties in total. What do I do?

    1. I could add all 130 properties to the struct. Will this affect performance and readability?
    2. I could create a list of structs, but I don't know how to access an item. E.g., to add to the list:

           listRowItems.Add(new RowItems(){a=1, b=1, c=1, d=1...});
           listRowItems.Add(new RowItems(){a=2, b=2, c=2, d=2...});

       How do I access the second group's item b? (See the sketch below.)
    3. Could I just use a dictionary with 130 items?
    4. Should I use a list of dictionaries (again, I don't know how to access a particular item)?
    5. Should I pass in a class of 130 properties?

    Just for your interest, the properties are CSS parameters used for a composite control. The control displays 13 elements in each row, there are 10 rows, and each row is customizable.
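    A hedged C# sketch of option 2 above, trimmed to three of the thirteen properties (the RowItems name follows the question; the rest is my own illustration): the answer to the access question is plain zero-based indexing.

        using System.Collections.Generic;

        // Assumed shape of the question's struct, trimmed for illustration.
        public struct RowItems
        {
            public int a, b, c; // ... the remaining CSS parameters
        }

        class Demo
        {
            static void Main()
            {
                var listRowItems = new List<RowItems>();
                listRowItems.Add(new RowItems { a = 1, b = 1, c = 1 });
                listRowItems.Add(new RowItems { a = 2, b = 2, c = 2 });

                // The second group's b: lists are zero-based, so index 1 is group 2.
                int secondB = listRowItems[1].b;
                System.Console.WriteLine(secondB); // prints 2
            }
        }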

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel.

    Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change.

    On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents -- in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added.

    It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know that until we have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git, and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) -- in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process.

    I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1.

    For the purposes of this project the host environment is Ubuntu 10.4; available resources are 8 GB RAM, 500 GB of free disk space, and a 4-CPU processor at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated.

    Hotei

    PS: I have looked at the site's suggested "related questions" and found nothing relevant.

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than on one that does not, because reorganizing the data in a clustered index involves changing the physical layout of the data on disk. I'm not sure how to phrase my question except through an example I came across at work.

    Assume there is a table (Junk) and two queries are done against it: the first query searches by Name, and the second searches by Name and Something. As I worked on the database, I discovered that the table had been created with two indexes, one to support each query, like so:

        --drop table Junk1
        CREATE TABLE Junk1
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name ON Junk1 ( Name )

        CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1 ( Name, Something )

    Now when I looked at the two indexes, it seemed that IX_Name was redundant, since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead:

        --drop table Junk2
        CREATE TABLE Junk2
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name_Something ON Junk2 ( Name, Something )

    Someone suggested that the first indexing scheme should be kept, since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates to Name and Something). Would that make sense? I think the second indexing scheme would be better, since it means one less index needs to be maintained. I would appreciate any insight into this specific example, or pointers to more info on maintenance of clustered indexes.

  • Lost session after redirect_to

    - by PeterWong
    I encountered some strange behavior in my current project involving sessions. The strange part is that it works in Safari but fails in other browsers (including Chrome, Firefox and Opera).

    There is a registration form for input of some key information (email, password, etc.) which is submitted to an action called "create". This is the basic code of the create action:

        @account = Account.new(params[:account])
        if @account.save
          ApplicationController.current_account = @account
          session[:current_account] = ApplicationController.current_account
          session[:account] = ApplicationController.current_account.id
          email = @account.email
          Mailer.deliver_account_confirmation(email)
          flash[:type] = "success"
          flash[:notice] = "Successfully Created Account"
          redirect_to :controller => "accounts", :action => "create_step_2"
        else
          flash[:type] = "error"
          flash[:title] = "Oops, something wasn't right."
          flash[:notice] = "Mistakes are marked below in red. Please fix them and resubmit the form. Thanks."
          render :action => "new"
        end

    I also created a before_filter in the application controller, which has the following code:

        ApplicationController.current_account = Account.find_by_id(session[:current_account].id) unless session[:current_account].blank?

    In Safari there is no problem, but in the other browsers session[:current_account] does not exist after the redirect, which produces the following error:

        RuntimeError in AccountsController#create_step_2
        Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id

    Could anyone please help me?

  • Caching issue with JavaScript and ASP.NET

    - by Ed Woodcock
    Hi guys: I asked a question a while back on here regarding caching data for a calendar/scheduling web app, and got some good responses. However, I have now decided to change my approach and start caching the data in JavaScript.

    I am directly caching the HTML for each day's column in the calendar grid inside the $('body').data() object, which gives very fast page load times (almost unnoticeable). However, problems start to arise when the user requests data that is not yet in the cache. This data is created by the server using an Ajax call, so it's asynchronous, and it takes about 0.2 s per week's worth of data.

    My current approach is simply to block for 0.5 s when the user requests information from the server, and to cache 4 weeks either side in the initial page load (plus 1 extra week per page-change request); however, I doubt this is the optimal method. Does anyone have a suggestion as to how to improve the situation?

    To summarise:

    - Each week takes 0.2 s to retrieve from the server, asynchronously.
    - Performance must be as close to real-time as possible. (However, the data does not need to be fully real-time: most appointments are added by the user, so we can re-cache after that.)
    - Currently 4 weeks are cached on either side of the initial week loaded: this is not enough.
    - Caching 1 year takes ~21 s; this is too slow for an initial load.
