Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 597/670

  • Editing a remote file on-the-fly with PHP

    - by user275074
    Hi, I have a requirement to edit a remote text file on the fly; its content currently stands at ~1 MB. I have tried a couple of approaches and both seem to be clunky or hog memory, which I can't rely on. Thinking it out logically, what I'm trying to achieve is: 1) FTP to a remote server. 2) Download a copy of the file for backup purposes and store it somewhere locally. 3) Open the remote file and add the necessary lines. 4) Remove lines from the remote file as per an array of unrequired data generated from the local server. Is this possible? I've managed to code steps 1 and 2, but I'm having difficulty with 3 and 4. The way I'm doing it now is to use fgets and return the whole string. Really, I don't want to do this, as it involves manipulating and regenerating the whole string (and it's large) and then re-inserting it between two markers in the remote file. Is there no way of manipulating the lines of text in the file on the fly?
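    A minimal sketch of steps 3 and 4, under the constraint that plain FTP offers no way to insert or delete lines in place, so the file has to be rewritten; the host, credentials, and the $unwanted array are placeholders:

        // Sketch: fetch, filter the lines in memory (~1 MB is manageable), re-upload.
        $conn = ftp_connect('ftp.example.com');            // assumed host
        ftp_login($conn, $user, $pass);

        $backup = tempnam(sys_get_temp_dir(), 'bak');
        ftp_get($conn, $backup, 'remote.txt', FTP_ASCII);  // step 2: local backup

        $lines   = file($backup);                          // one array entry per line
        $lines[] = "extra line to append\n";               // step 3: add lines
        $lines   = array_diff($lines, $unwanted);          // step 4: drop unwanted lines
                                                           // (assumes $unwanted entries keep their newlines)
        $tmp = tempnam(sys_get_temp_dir(), 'new');
        file_put_contents($tmp, implode('', $lines));
        ftp_put($conn, 'remote.txt', $tmp, FTP_ASCII);     // rewrite the remote file
        ftp_close($conn);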

    Read the article

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631 I need something similar, but without having to allocate a buffer: the buffer is large in theory, but the user space program only needs to access some parts of it (it mocks some registers of a hardware device). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are really requested by user space. So: I use a my_mmap() function for the device file (actually, it is the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the fault handler installed through the vma's vm_ops (my my_fault() function), I should return a page. Except that: In the my_fault() method I cannot obtain a page through: vmf->page=vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT)); since there is no allocated buffer: my_buf = vmalloc_user(MY_BUF_SIZE); fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size." (and there is no room or swap to increase that vmalloc= parameter). So, I would need to get a page from the kernel and fill the vmf->page field. How do I allocate a page? (I assume that the offset of the page is known, as it is vmf->pgoff.) What base memory should I use instead of my_buf? PS: I also did set vma->flags |= VM_NORESERVE; (in my_mmap()), but I am not sure if it helps. Is there any vmalloc_user_unreserved()-like function (let's say, lazy allocation)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf=vmalloc_user(<large_size>) didn't work.
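    One way (a sketch only, using kernel APIs of the 3.x era; MY_MAX_PAGES and the bookkeeping array are invented for illustration) is to allocate single pages lazily in the fault handler and remember them, so repeated faults on the same offset return the same page:

        #include <linux/mm.h>

        static struct page *my_pages[MY_MAX_PAGES];   /* hypothetical page table */

        static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
        {
            pgoff_t off = vmf->pgoff;

            if (off >= MY_MAX_PAGES)
                return VM_FAULT_SIGBUS;
            if (!my_pages[off]) {
                my_pages[off] = alloc_page(GFP_KERNEL);   /* one 4 KiB page, on demand */
                if (!my_pages[off])
                    return VM_FAULT_OOM;
                clear_page(page_address(my_pages[off]));  /* zero it, like vmalloc_user */
            }
            get_page(my_pages[off]);     /* the fault path consumes a reference */
            vmf->page = my_pages[off];
            return 0;
        }

    The pages would then be released with __free_page() in the driver's cleanup path.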

    Read the article

  • Returning c_str from a function

    - by user199421
    This is from a small library that I found online: const char* GetHandStateBrief(const PostFlopState* state) { static std::ostringstream out; ... rest of the function ... return out.str().c_str(); } Now in my code I am doing this: const char *d = GetHandStateBrief(&post); std::cout << d << std::endl; Now, at first d contained garbage. I then realized that the C string I am getting from the function is destroyed when the function returns, because the std::ostringstream is allocated on the stack. So I added: return strdup(out.str().c_str()); And now I can get the text I need from the function. I have two questions: 1) Am I understanding this correctly? 2) I later noticed that the ostringstream was allocated with static storage. Doesn't that mean that the object is supposed to stay in memory until the program terminates? And if so, then why can't I access the string?
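    The static stream does live until program exit, but out.str() returns a temporary std::string by value, and the pointer from its c_str() dies with that temporary at the end of the full expression; hence the garbage. A sketch of the least leak-prone fix (PostFlopState is assumed to come from the library's headers): return the string by value instead of a raw pointer.

        #include <sstream>
        #include <string>

        // The caller owns the returned buffer, so nothing dangles and no
        // strdup() allocation is leaked.
        std::string GetHandStateBrief(const PostFlopState* state) {
            std::ostringstream out;      // a plain local is fine now
            // ... rest of the function ...
            return out.str();
        }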

    Read the article

  • Implementing list position locator in C++?

    - by jfrazier
    I am writing a basic Graph API in C++ (I know libraries already exist, but I am doing it for the practice/experience). The structure is basically that of an adjacency list representation. So there are Vertex objects and Edge objects, and the Graph class contains: list<Vertex *> vertexList list<Edge *> edgeList Each Edge object has two Vertex* members representing its endpoints, and each Vertex object has a list of Edge* members representing the edges incident to the Vertex. All this is quite standard, but here is my problem. I want to be able to implement deletion of Edges and Vertices in constant time, so for example each Vertex object should have a Locator member that points to the position of its Vertex* in the vertexList. The way I first implemented this was by saving a list::iterator, as follows: vertexList.push_back(v); v->locator = --vertexList.end(); Then if I need to delete this vertex later, then rather than searching the whole vertexList for its pointer, I can call: vertexList.erase(v->locator); This works fine at first, but it seems that if enough changes (deletions) are made to the list, the iterators will become out-of-date and I get all sorts of iterator errors at runtime. This seems strange for a linked list, because it doesn't seem like you should ever need to re-allocate the remaining members of the list after deletions, but maybe the STL does this to optimize by keeping memory somewhat contiguous? In any case, I would appreciate it if anyone has any insight as to why this happens. Is there a standard way in C++ to implement a locator that will keep track of an element's position in a list without becoming obsolete? Much thanks, Jeff
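    For what it's worth, std::list iterators are guaranteed to stay valid across insertions and across erasure of other elements, so the locator scheme is sound in principle; the likely culprit is a stale locator, i.e. erasing through an iterator whose element was already removed (say, the same vertex deleted twice, or an edge locator not cleaned up when its endpoint vertex goes away). A small sketch of the guarantee, for reference:

        #include <iostream>
        #include <iterator>
        #include <list>

        int main() {
            std::list<int> l;
            for (int i = 1; i <= 4; ++i) l.push_back(i);

            std::list<int>::iterator loc = l.begin();
            std::advance(loc, 2);       // "locator" for the value 3
            l.erase(l.begin());         // erasing another element: loc stays valid
            l.push_back(5);             // inserting: loc stays valid
            std::cout << *loc << "\n";  // prints 3
            l.erase(loc);               // loc is now invalid and must never be reused
            return 0;
        }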

    Read the article

  • Which pdf elements could cause crashes?

    - by Felixyz
    This is a very general question, but it's based on a specific problem. I've created a PDF reader app for the iPad and it works fine except for certain PDF pages which always crash the app. We have now found out that the very same pages cause Safari to crash as well, so, as I had started to suspect, the problem is somewhere in Apple's PDF rendering code. From what I have been able to see, the crashing pages cause the rendering libraries to start allocating memory like mad until the app is killed. I have nothing else to help me pinpoint what triggers this process. It doesn't necessarily happen with the largest documents, or the ones with the most shapes. In fact, we haven't found any parameter that helps us predict which pages will crash and which won't. Now we have just discovered that running the pages through a consumer program that merges documents gets rid of the problem, but I haven't been able to detect which attribute or element is the key. Changing documents by hand is also not an option for us in the long run. We need to run an automated process on our server. I'm hoping someone with deeper knowledge about the PDF file format will be able to point me in a reasonable direction to look for document features that could cause this kind of behavior. All I've found so far is something about JBIG2 images, and I don't think we have any of those.

    Read the article

  • After Delete Trigger Fires Only After Delete?

    - by Brandi
    I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation... I made three nearly identical SQL CLR after-delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it. By stopped working, I mean records could not be deleted from the table via the client software. Disabling the trigger allowed deletes again, but re-enabling it interfered with the ability to delete. So my question is: how can this be the case? Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone? All the trigger looks like is this: ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger] GO The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but I will update the question after I find out.
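    A note that may explain the behavior: an AFTER trigger still runs inside the transaction of the DELETE that fired it, so "after" refers to ordering within the statement, not to the commit; an unhandled exception in the CLR body (typically surfacing as error 6522) dooms that transaction and the rows come back. A minimal T-SQL illustration with a hypothetical table:

        CREATE TABLE Demo (Id INT);
        GO
        -- Stand-in for a CLR trigger whose body fails:
        CREATE TRIGGER DemoTrigger ON Demo AFTER DELETE AS
        BEGIN
            ROLLBACK TRANSACTION;                 -- what an unhandled error amounts to
            RAISERROR('delete refused', 16, 1);
        END
        GO
        INSERT INTO Demo VALUES (1);
        DELETE FROM Demo;                         -- reports an error
        SELECT COUNT(*) FROM Demo;                -- still returns 1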

    Read the article

  • Silverlight performance with many loaded controls

    - by gius
    I have a SL application with many DataGrids (from Silverlight Toolkit), each on its own view. If several DataGrids are opened, changing between views (TabItems, for example) takes a long time (a few seconds) and freezes the whole application (UI thread). The more DataGrids are loaded, the longer the change takes. The DataGrids that slow the UI change might be elsewhere in the app and not even visible at that moment. But once they are opened (and loaded with data), they slow the showing of other DataGrids. Note that the DataGrids are NOT disposed and then recreated again; they remain in memory, and only their parent control is hidden and made visible again. I have profiled the application. It shows that agcore.dll's SetValue function is the bottleneck. Unfortunately, debug symbols are not available for this Silverlight native library responsible for drawing. The problem is not in the DataGrid control - I tried to replace it with XCeed's grid and the performance when changing views is even worse. Do you have any idea how to solve this problem? Why do more opened controls slow down other controls? I have created a sample that shows this issue: http://cenud.cz/PerfTest.zip UPDATE: Using the VS11 profiler on the sample provided suggests that the problem could be in MeasureOverride being called many times (for each DataGridCell, I guess). But still, why is it slower as more controls are loaded elsewhere? Is there a way to improve the performance?

    Read the article

  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport? The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible. I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.
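    A rough sketch of the decoupling (all names assumed): business code renders messages and drops them on a bounded queue; one worker thread drains it, throttles, and hands off to JavaMail. Bounce (NDR) processing and persistence across restarts would hang off the same dispatcher rather than off the business layer.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        public class MailDispatcher implements Runnable {
            private final BlockingQueue<String> queue =
                    new LinkedBlockingQueue<String>(10000);   // bounded: back-pressure
            private final int perSecond;

            public MailDispatcher(int perSecond) { this.perSecond = perSecond; }

            public void submit(String renderedMessage) throws InterruptedException {
                queue.put(renderedMessage);                   // called by the business layer
            }

            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        String msg = queue.take();
                        // Transport.send(...) via JavaMail would go here
                        TimeUnit.MILLISECONDS.sleep(1000L / perSecond); // crude throttle
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();       // allow clean shutdown
                }
            }
        }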

    Read the article

  • Can't get screen pixel from a specific full screen game with any language?

    - by user1007059
    Okay, I know this might seem like I'm posting a duplicate question, since I asked something similar a day ago. HOWEVER, if anyone sees any problem with this, please read my question first before judging: Yesterday I tried getting a specific pixel from a fullscreen game in C#. I thought my C# code was faulty, but when I tried multiple full screen games today, they all worked except for that specific game. I literally tried with 10 different full screen games, a couple being mmofps, mmorpg, mmotps, regular rpg games, regular shooters, regular action adventure games, etc. I tried with multiple programming languages, and with every game except that specific game, the pixel color is returned as I wanted. So let me explain what I tried: first I tried returning an IntPtr with C# using GetDC(IntPtr.Zero) before invoking GetPixel(int x, int y) and then getting the color out of it. Then I tried using the Robot class in Java and its getPixelColor(int x, int y) method. I also tried using GetDC(0) to return an HDC object in C++ and then invoking GetPixel(int x, int y) before again extracting the color. These three methods worked EXACTLY the same in every single game except that specific game I was talking about. They returned the pixel perfectly, and extracted the exact same color perfectly. I don't feel it's necessary to tell you the game name or anything, since you probably don't even know it, but what could possibly be causing this malfunction in one specific game? PS: The game ALWAYS returns an RGB color of: A = 255, R = 0, G = 0, B = 0. Also, I tried taking a snapshot of the game with the 3 programming languages and then getting the pixel, which actually works in all 3 languages, but since I need to get this pixel every 30 ms, it kind of makes my game lag a bit (+ I think it takes up a lot of memory).
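    For reference, a sketch of the probe described above (C#, P/Invoke). A plausible explanation for the constant black, though only the game's renderer can confirm it: a game drawing through an exclusive-mode Direct3D surface never goes through GDI, so GetPixel has nothing to read, whichever language calls it; a screen-capture path (as tried above) or a DirectX-level hook is then the only option.

        using System;
        using System.Runtime.InteropServices;

        static class ScreenProbe
        {
            [DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
            [DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hdc);
            [DllImport("gdi32.dll")]  static extern uint GetPixel(IntPtr hdc, int x, int y);

            // Returns a COLORREF laid out as 0x00BBGGRR.
            public static uint Probe(int x, int y)
            {
                IntPtr hdc = GetDC(IntPtr.Zero);        // device context for the screen
                try     { return GetPixel(hdc, x, y); }
                finally { ReleaseDC(IntPtr.Zero, hdc); }
            }
        }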

    Read the article

  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures I need to store in Java. The naive don't-worry-about-memory approach says do this: public class Record { final private int field1; final private int field2; final private long field3; /* constructor & accessors here */ } List<Record> records = new ArrayList<Record>(); If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) compares with an optimized approach for storage costs: public class OptimizedRecordStore { final private int[] field1; final private int[] field2; final private long[] field3; Record getRecord(int i) { return new Record(field1[i], field2[i], field3[i]); } /* constructor and other accessors & methods */ } Edit: assume the number of records is something that is changed infrequently or never. I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. Obviously, if I add/change the number of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one, or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind. In other situations similar to this, I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.
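    For a rough sense of the storage cost (VM-dependent, so treat these as ballpark figures): each Record carries an object header (about 8-16 bytes) plus 4+4+8 = 16 bytes of fields plus alignment padding, call it ~32 bytes, and the ArrayList adds a 4-8 byte reference per element, so 10^6 records land somewhere around 40 MB, with a million small objects for the GC to trace. OptimizedRecordStore's three primitive arrays cost just the 16 bytes of data per record, roughly 16 MB in three allocations, and as a bonus they give exactly the column access the MATLAB use case wants.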

    Read the article

  • error: invalid type argument of '->' (have 'struct node')

    - by Roshan S.A
    Why can't I access the pointer "Cells" like an array? I have allocated the appropriate memory, so why won't it act like an array here? It works like an array for a pointer to a basic data type.

        #include<stdio.h>
        #include<stdlib.h>
        #include<ctype.h>
        #define MAX 10

        struct node
        {
            int e;
            struct node *next;
        };
        typedef struct node *List;
        typedef struct node *Position;

        struct Hashtable
        {
            int Tablesize;
            List Cells;
        };
        typedef struct Hashtable *HashT;

        HashT Initialize(int SIZE, HashT H)
        {
            int i;
            H = (HashT)malloc(sizeof(struct Hashtable));
            if (H != NULL)
            {
                H->Tablesize = SIZE;
                printf("\n\t%d", H->Tablesize);
                H->Cells = (List)malloc(sizeof(struct node) * H->Tablesize);
                /* should it not act like an array from here on? */
                if (H->Cells != NULL)
                {
                    for (i = 0; i < H->Tablesize; i++)
                    {
                        /* the following lines are the ones that throw the error */
                        H->Cells[i]->next = NULL;
                        H->Cells[i]->e = i;
                        printf("\n %d", H->Cells[i]->e);
                    }
                }
            }
            else
                printf("\nError! Out of Space");
        }

        int main()
        {
            HashT H;
            H = Initialize(10, H);
            return 0;
        }

    The error I get is as in the title: error: invalid type argument of '->' (have 'struct node').
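    In case it helps a future reader: Cells has type struct node * (List), so H->Cells[i] is already a struct node value rather than a pointer, and member access needs `.` instead of `->`. A minimal sketch of the corrected loop:

        for (i = 0; i < H->Tablesize; i++)
        {
            H->Cells[i].next = NULL;   /* '.' because Cells[i] is a struct, not a pointer */
            H->Cells[i].e = i;
            printf("\n %d", H->Cells[i].e);
        }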

    Read the article

  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering about how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: nametables. Basically, from what I've read, each 8x8 block in a NES nametable points to a location in the pattern table, which holds graphic memory. In addition, the nametable also has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this (assuming 16 8x8 blocks), with A B C D being pointers to sprite data in the nametable:

        ABBB
        CDCC
        DDDD
        DDDD

    and 1 2 3 being pointers to color palette data in the attribute table, with < referencing the value to the left, ^ the one above, and ' the one to the left and above:

        1<2<
        ^'^'
        3<3<
        ^'^'

    So, in the example above, the blocks would be colored as so:

        1A 1B 2B 2B
        1C 1D 2C 2C
        3D 3D 3D 3D
        3D 3D 3D 3D

    Now, if I have this on a fixed screen, it works great, because the NES resolution is 256x240 pixels. Now, how do these tables get adjusted for scrolling? Because Nametable 0 can scroll into Nametable 1, and if you keep scrolling, Nametable 0 will wrap around again. That I get. But what I don't get is how the attribute table wraps around as well. From what I've read online, the 16x16 blocks it assigns attributes for will cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice versa in SMB3). The concern I have is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles to the right to be green as well until they move more into frame, at which point they'll revert to their normal colors.
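    For the static case, the hardware addressing works out like this (a sketch in C; $23C0 is nametable 0's attribute table, and ppu_read() is a stand-in for however an emulator or mapper exposes VRAM):

        /* 2-bit palette index for the 8x8 tile at column tx, row ty (0-based): */
        unsigned addr    = 0x23C0 + (ty / 4) * 8 + (tx / 4);  /* one byte per 4x4 tiles */
        unsigned shift   = ((ty & 2) << 1) | (tx & 2);        /* quadrant: 0, 2, 4 or 6 */
        unsigned palette = (ppu_read(addr) >> shift) & 3;

    When scrolling, the same formula is applied in the neighboring nametable's attribute table ($27C0, $2BC0, ...); and because the finest attribute unit is a 16x16 area, a partially scrolled edge column shares one palette across both nametables, which is exactly the edge distortion SMB3 shows.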

    Read the article

  • How many users are sufficient to make a heavy load for web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server which has an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous users (simultaneous), the application drastically decreases its performance up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users? EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop, and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. The administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering 2 ways: my tables are MyISAM; I heard they use table-level locking, which is very slow - is that right? Could I change them to InnoDB without worry? What if I take MySQL and move it to a dedicated machine with a lot of RAM?
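    On the MyISAM point: yes, MyISAM takes table-level locks, so a write-heavy authenticated workload serializes badly, while InnoDB locks rows. Converting is a single statement per table, though it copies the table, so it should be done off-peak and after a backup. A sketch (table name and sizes illustrative):

        -- Row-level locking for a hot Drupal table:
        ALTER TABLE node ENGINE=InnoDB;

        -- And in my.cnf, give InnoDB a real buffer pool (on a 4 GB box):
        --   innodb_buffer_pool_size = 2G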

    Read the article

  • A generic error occurred in GDI+, JPEG Image to MemoryStream

    - by madcapnmckay
    Hi, This seems to be a bit of an infamous error all over the web - so much so that I have been unable to find an answer to my problem, as my scenario doesn't fit. An exception gets thrown when I save the image to the stream. Weirdly, this works perfectly with a PNG but gives the above error with JPG and GIF, which is rather confusing. Most similar problems out there relate to saving images to files without permissions. Ironically, the solution is supposed to be to use a memory stream, which is what I am doing.... public static byte[] ConvertImageToByteArray(Image imageToConvert) { using (var ms = new MemoryStream()) { ImageFormat format; switch (imageToConvert.MimeType()) { case "image/png": format = ImageFormat.Png; break; case "image/gif": format = ImageFormat.Gif; break; default: format = ImageFormat.Jpeg; break; } imageToConvert.Save(ms, format); return ms.ToArray(); } } More detail on the exception - the reason this causes so many issues is the lack of explanation :( System.Runtime.InteropServices.ExternalException was unhandled by user code Message="A generic error occurred in GDI+." Source="System.Drawing" ErrorCode=-2147467259 StackTrace: at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams) at System.Drawing.Image.Save(Stream stream, ImageFormat format) at Caldoo.Infrastructure.PhotoEditor.ConvertImageToByteArray(Image imageToConvert) in C:\Users\Ian\SVN\Caldoo\Caldoo.Coordinator\PhotoEditor.cs:line 139 at Caldoo.Web.Controllers.PictureController.Croppable() in C:\Users\Ian\SVN\Caldoo\Caldoo.Web\Controllers\PictureController.cs:line 132 at lambda_method(ExecutionScope , ControllerBase , Object[] ) at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7() at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) InnerException: OK, things I have tried so far: cloning the image and working on that; retrieving the encoder for that MIME type and passing it along with a JPEG quality setting. Please, can anyone help?
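    One cause that fits the PNG-works-but-JPEG-fails pattern: GDI+ decodes lazily and keeps a dependency on the stream the Image was originally loaded from; if that stream has been closed by the time Save runs, it fails with exactly this generic error. A defensive sketch - copying onto a fresh Bitmap detaches the pixels from the original stream (note that this re-encodes, so an encoder quality parameter may need to be reapplied):

        public static byte[] ConvertImageToByteArray(Image imageToConvert, ImageFormat format)
        {
            using (var safeCopy = new Bitmap(imageToConvert))   // detach from source stream
            using (var ms = new MemoryStream())
            {
                safeCopy.Save(ms, format);
                return ms.ToArray();
            }
        }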

    Read the article

  • What should be taught in a "Fundamentals of programming" course at university?

    - by Dervin Thunk
    I have started a new question (see here), because I think the topic is of importance in a more general form. The question is now: If you were a professor at a Computer Science Dept. in some university, what would make it into your course? This is a programming course, second term, first year computer science/computer engineering. Remember you have a limited amount of time, and students are of different levels of competence, and some may be scientists, but some will also go on to be programmers in companies of different kinds. You have to cater to all. Bonus: What language? (Although see this question for my current thoughts about this...) Maybe you want to attach a course outline from some university? See here for an even more general question about this. Answer: I can't really summarize this post... I guess it was too subjective. However, it looks like we have to cover the history of computing up to a certain extent, computer architecture (memory, registers, whatever), C, and finally some basic algos and data structures in a problem solving fashion. This will be the bare bones of the course. Thanks all. I will accept the most voted up answer to close the thread, as it should be done.

    Read the article

  • Aligning music notes using String matching algorithms or Dynamic Programming

    - by Dolphin
    Hi, I need to compare 2 sets of musical pieces: a performance (taken in MIDI format, with note details extracted and saved in a database table) against sheet music (taken into XML format). When evaluating the playing against the sheet music (i.e. note details: pitch, duration, rhythm), note alignment needs to be done, to identify missed/extra/incorrect/swapped notes relative to the reference (sheet music) notes. I have around 1800-2500 notes in one piece (it can be even more with polyphonic music; right now I'm working on monophonic). So will I have to load all of these into an array? Will that overload memory or cause a stack overflow? There are string matching algorithms like KMP and Boyer-Moore, but note alignment can also be done through dynamic programming. How can I use dynamic programming to approach this? What are the available algorithms? Is it about approximate string matching? Which approach is more productive: string matching algorithms like Boyer-Moore, or dynamic programming? How can I assess which is more effective? I'd greatly appreciate any insight or suggestions. Thanks in advance
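    Alignment with missed/extra/incorrect notes is exactly global sequence alignment (Needleman-Wunsch / edit distance), not exact string matching, so KMP and Boyer-Moore don't really apply. A minimal sketch in Python, with an assumed cost function over (pitch, duration) pairs; gaps model missed or extra notes. At 2500x2500 the table has about 6 million cells, which fits comfortably in memory, and backtracking through the table recovers which notes were missed, extra, or substituted:

        def align(ref, played, gap=1, cost=lambda x, y: 0 if x == y else 1):
            m, n = len(ref), len(played)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1): d[i][0] = i * gap
            for j in range(1, n + 1): d[0][j] = j * gap
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    d[i][j] = min(d[i-1][j-1] + cost(ref[i-1], played[j-1]),  # match/substitute
                                  d[i-1][j] + gap,   # reference note missed by the player
                                  d[i][j-1] + gap)   # extra note played
            return d[m][n]    # total alignment cost

        # e.g. align([(60, 1), (62, 1)], [(60, 1), (61, 1)]) -> 1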

    Read the article

  • Java: autofiltering list?

    - by Jason S
    I have a series of items arriving which are used in one of my data structures, and I need a way to keep track of those items that are retained. interface Item {} class Foo implements Item { ... } class Baz implements Item { ... } class StateManager { List<Foo> fooList; Map<Integer, Baz> bazMap; public List<Item> getItems(); } What I want is that if I do the following: for (int i = 0; i < SOME_LARGE_NUMBER; ++i) { /* randomly do one of the following: * 1) put a new Foo somewhere in the fooList * 2) delete one or more members from the fooList * 3) put a new Baz somewhere in the bazMap * 4) delete one or more members from the bazMap */ } then if I make a call to StateManager.getItems(), I want to return a list of those Foo and Baz items, which are found in the fooList and the bazMap, in the order they were added. Items that were deleted or displaced from fooList and bazMap should not be in the returned list. How could I implement this? SOME_LARGE_NUMBER is large enough that I don't have the memory available to retain all the Foo and Baz items, and then filter them.
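    One way to get insertion-ordered retrieval of only the retained items (a sketch; the design and method names are assumed, reusing the question's Item/Foo/Baz types): route every mutation through StateManager and mirror live items in a LinkedHashSet, which preserves insertion order, removes in constant time, and never holds deleted or displaced items:

        import java.util.*;

        class StateManager {
            private final List<Foo> fooList = new ArrayList<Foo>();
            private final Map<Integer, Baz> bazMap = new HashMap<Integer, Baz>();
            private final Set<Item> live = new LinkedHashSet<Item>(); // insertion order

            public void addFoo(int index, Foo f) { fooList.add(index, f); live.add(f); }
            public void removeFoo(Foo f)         { fooList.remove(f);     live.remove(f); }
            public void putBaz(int key, Baz b)   { live.remove(bazMap.put(key, b)); live.add(b); }
            public void removeBaz(int key)       { live.remove(bazMap.remove(key)); }

            public List<Item> getItems()         { return new ArrayList<Item>(live); }
        }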

    Read the article

  • Request size limitation when using MultipartHttpServletRequest of Spring 3.0

    - by Spiderman
    I'd like to know what the size limitation is if I upload a list of files in one client form submission using the HTTP multipart content type. On the server side I am using Spring's MultipartHttpServletRequest to handle the request. My questions: Should there be a separate per-file size limitation and a total request size limitation, or is file size the only limit, with the request capable of uploading hundreds of files as long as they are not too large? Does the Spring request wrapper read the complete request and store it in the Java heap memory, or does it store temporary files of it to be able to handle a big quota? Would reading the HttpServletRequest in streaming mode change the size limitation, compared with the complete HTTP request being read at once by the application server? What is the bottleneck of this process: the Java heap size, the quota of the filesystem on which my web server runs, the maximum allowed BLOB size that the database in which I am going to save the file allows, or Spring's internal limitations? Related threads that still don't have an exact answer to this: does-spring-framework-support-streaming-mode-in-mutlipart-requests is-there-a-way-to-get-raw-http-request-stream-from-java-servlet-handler how-to-drop-body-of-a-request-after-checking-headers-in-servlet apache-commons-fileupload-throws-malformedstreamexception
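    On the heap question, at least with the commons-fileupload backend: parts larger than an in-memory threshold are spooled to temp files on disk, so the heap holds at most the threshold per part, and the request-wide cap is a separate setting. A sketch for Spring 3 (values illustrative):

        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.web.multipart.commons.CommonsMultipartResolver;

        @Configuration
        public class UploadConfig {
            @Bean
            public CommonsMultipartResolver multipartResolver() {
                CommonsMultipartResolver r = new CommonsMultipartResolver();
                r.setMaxUploadSize(20 * 1024 * 1024);  // whole request: 20 MB
                r.setMaxInMemorySize(256 * 1024);      // per-part heap threshold before spooling
                return r;
            }
        }

    With that in place, the practical bottlenecks become the temp-directory and database BLOB limits rather than the JVM heap.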

    Read the article

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through one at a time is becoming a problem (too slow). From what I understand, SMO does not support concurrent operations, and my tests have borne that out. There seems to be shared state at the Server object level (for things like DataReader context), which keeps throwing exceptions such as "reader is already open." I apologize for not having the exact exceptions I am getting; I will try to get them and update this post. I am no expert on SMO and am just feeling my way through, to be honest. I'm not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you guys do something like this? Am I using the wrong technology with SMO? All I want to do is execute SQL scripts against databases in a single SQL Server instance in parallel. Thanks for any help you can give, Daniel
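    A sketch of the usual workaround (instance name and method shape assumed): SMO's Server/ServerConnection pair is not thread-safe, but nothing stops each worker thread from building its own, so the connection state (including its DataReader) is never shared:

        using System.Threading.Tasks;
        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        static class BulkUpdater
        {
            // One Server/ServerConnection pair per worker thread.
            public static void UpdateAll(string[] databaseNames, string updateScript)
            {
                Parallel.ForEach(databaseNames, name =>
                {
                    var conn = new ServerConnection("MYSERVER");   // assumed instance
                    var server = new Server(conn);
                    server.Databases[name].ExecuteNonQuery(updateScript);
                    conn.Disconnect();
                });
            }
        }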

    Read the article

  • Pump Messages During Long Operations + C#

    - by Newbie
    Hi, I have a web service that does a huge computation and takes more than a minute. I have generated the proxy file of the web service, and from my client end I am using the DLL (of course, I generated the proxy DLL). My client side code is: TimeSeries3D t = new TimeSeries3D(); int portfolioId = 4387919; string[] str = new string[2]; str[0] = "MKT_CAP"; DateRange dr = new DateRange(); dr.mStartDate = DateTime.Today; dr.mEndDate = DateTime.Today; Service1 sc = new Service1(); t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr); But since it is taking too much time for the server to compute, after 1 minute I receive the error message: The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. Kindly guide me what to do? Thanks
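    That message is the ContextSwitchDeadlock Managed Debugging Assistant warning: the synchronous proxy call blocks the UI (STA) thread, which then stops pumping messages. The usual cure is to move the call off the UI thread and marshal the result back. A sketch with BackgroundWorker, reusing the names above:

        using System.ComponentModel;

        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) =>
        {
            var sc = new Service1();                 // runs on a thread-pool thread
            e.Result = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);
        };
        worker.RunWorkerCompleted += (s, e) =>
        {
            var t = (TimeSeries3D)e.Result;          // back on the UI thread here
            // ... display/use t ...
        };
        worker.RunWorkerAsync();                     // the UI keeps pumping meanwhile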

    Read the article

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB, and Drizzle, to name a few. Which brings us to the source of the problem: we are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL/Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32 GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that's still not production-ready. In addition, I have heard a lot of good things about the Percona builds. This is what I know so far: MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine. Percona: scales well, good backing company; I don't have much experience with it. MariaDB: don't know much about it besides that it was founded by original MySQL developers (including Monty). OurDelta: don't know much. Drizzle: mostly optimized for cloud computing. I would like to know the general notion about this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want to check the Redis info on my PC with Node, so I use node_redis and run the info function: var redis = require("redis"), client = redis.createClient(); client.on("connect", function () { client.info(function (err, replay) { console.log(replay); }) }) but the response is unformatted: `#Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n' How can I turn it into an object, like: { redis_version:2.6.16, redis_git_sha1:00000000, redis_git_dirty:0, ...... } so that I can read each property's value and get the information I need?
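    The reply is just the raw INFO text: "key:value" lines separated by \r\n, with "# Section" headers and blank lines in between. A small parser sketch in the same style as the snippet above:

        function parseInfo(reply) {
            var obj = {};
            String(reply).split('\r\n').forEach(function (line) {
                if (!line || line.charAt(0) === '#') return;  // skip headers and blanks
                var idx = line.indexOf(':');
                if (idx > 0) obj[line.slice(0, idx)] = line.slice(idx + 1);
            });
            return obj;
        }

        // parseInfo(replay).redis_version  -> "2.6.16"

    Values come out as strings, so numeric fields need an explicit Number(...) where it matters.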

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: Here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )
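    Before caching the whole database in RAM, it may be worth ruling out an N+1 pattern: by default LinqToSql loads each EntitySet lazily, with a separate query per parent row, and DataLoadOptions.LoadWith fetches the children in the same round trip instead. A sketch (the context and property names are assumptions about the generated model):

        using System.Data.Linq;

        var ctx = new MyDataContext(connectionString);   // hypothetical context
        var options = new DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);       // eager-load the EntitySet
        ctx.LoadOptions = options;

        var hits = ctx.Customers
                      .Where(c => c.Orders.Any(o => o.ProductName == "Stopwatch"))
                      .ToList();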

    Read the article

  • Creating subtree from tree which is represented in xml - python

    - by Jay
    Hi, I have an XML document (in the form of a tree), and I need to create sub-trees out of it. For example:

        <a>
          <b>
            <c>Hello</c>
          </b>
          <d>
            <e>Hi</e>
          </d>
        </a>

    The subtree document would be:

        <root>
          <a>
            <b>
              <c>Hello</c>
            </b>
          </a>
          <a>
            <d>
              <e>Hi</e>
            </d>
          </a>
        </root>

    What is the best XML library in Python to do this? Any algorithm that already does this would also be helpful. Note: the XML doc won't be that big; it will easily fit in memory.
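    The standard library's xml.etree.ElementTree is enough here, given that the document fits in memory. A sketch: for each child of the root <a>, build a fresh <a> wrapper under <root> and move that child's whole subtree into it:

        import xml.etree.ElementTree as ET

        def split_subtrees(xml_text):
            a = ET.fromstring(xml_text)              # parse the <a> element
            root = ET.Element('root')
            for child in list(a):                    # <b>, <d>, ...
                wrapper = ET.SubElement(root, a.tag) # a new <a> per branch
                wrapper.append(child)                # the subtree moves wholesale
            return ET.tostring(root)

        print(split_subtrees('<a><b><c>Hello</c></b><d><e>Hi</e></d></a>'))
        # b'<root><a><b><c>Hello</c></b></a><a><d><e>Hi</e></d></a></root>'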

    Read the article

  • cell and array in Matlab

    - by Tim
    Hi, I am a little confused about the usage of cell and array in Matlab and would like to hear about your understanding. Here are my observations: (1) An array can dynamically adjust its own memory to allow a dynamic number of elements, while a cell seems not to act in the same way: a=[]; a=[a 1]; b={}; b={b 1}; (2) Several elements can be retrieved from a cell, while they seemingly cannot be from an array: a={'1' '2'}; figure, plot(...); hold on; plot(...); legend(a{1:2}); b=['1' '2']; figure, plot(...); hold on; plot(...); legend(b(1:2)); % b(1:2) is an array, not its elements, so it is wrong for legend. Are these correct? What are some other differences in usage between cell and array? Thanks and regards!
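    On observation (1), a note: cell arrays do grow dynamically; the catch is that b = {b 1} builds a new cell whose first element is the old cell (nesting it) rather than appending. Concatenation uses [], even for cells, and end+1 indexing also works. A short sketch:

        b = {};
        b = [b {1}];    % b is now {1}       ([] concatenates, {} nests)
        b{end+1} = 2;   % b is now {1 2}
        b = {b 3};      % b is now {{1 2} 3} -- the nesting trap from (1)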

    Read the article
