Search Results

Search found 12634 results on 506 pages for 'transactional memory'.

Page 225/506

  • When machine code is generated from a program, how does it translate to hardware-level operations?

    - by user553492
    Say the instruction is something like 100010101 1010101 01010101 011101010101. How does this translate into an actual job, such as deleting something from memory? Memory consists of actual physical transistors that hold data. Is it some external signal that causes them to lose that data? I want to know how that signal is generated: how do some binary numbers change the state of a physical transistor? Is there a level beyond machine code that isn't explicitly visible to a programmer? I have heard of microcode that handles instructions at the hardware level, even below assembly language. But I still pretty much don't understand. Thanks!


  • WPF Prism deactivate?

    - by 2Fast4YouBR
    Hi all, I have a problem and would like to know if it is a common problem or just with me. I am using WPF with Prism and Unity, all with the MVVM pattern. I am loading a viewModel that has a reference to a dropdown with a few items; my idea is that each time the user clicks somewhere to open the view that contains this dropdown, the dropdown should be shown with different values. The problem I see is that after I show the view for the first time and after the first DEACTIVATE, when I try to load it again it looks like it is still in memory (it was not deactivated/disposed), so the view model's constructor is not called again and the dropdown is shown with the same old values. public BranchSelectionViewModel(IUnityContainer unityContainer) { this.unityContainer = unityContainer; User user = this.unityContainer.Resolve<User>(); this.branches = new ObservableCollection<Department>(user.Departments .Where(department => department.DepartmentId != user.SelectedDepartment.DepartmentId)); }
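    One hedged fix, assuming Prism 4's IRegionMemberLifetime interface (Microsoft.Practices.Prism.Regions; earlier Prism versions may lack it): have the view model opt out of region caching, so deactivation actually removes the view and the next navigation resolves a fresh instance through Unity. A minimal sketch:

        using Microsoft.Practices.Prism.Regions;

        public class BranchSelectionViewModel : IRegionMemberLifetime
        {
            // Returning false tells the region not to keep this member
            // alive after it is deactivated, so the constructor (and the
            // dropdown initialization) runs again on the next activation.
            public bool KeepAlive
            {
                get { return false; }
            }
        }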


  • iOS Downloading Videos and saving in Application Support folder

    - by Satyam svv
    In my application, I have to download around 10 videos and play them accordingly. Each video is around 50 MB. I'm using the following code, and after downloading a video I save it to the Application Support folder to avoid iCloud sync. But the problem is that the app crashes while downloading the videos. [NSURLConnection sendAsynchronousRequest:req queue:[[NSOperationQueue alloc] init] completionHandler:^(NSURLResponse *response, NSData *rcvdDat, NSError * err) { . . . } What I'm thinking is that while a video is downloading it resides entirely in memory, so the total memory occupied by the app keeps increasing until iOS kills it. I would like to download the video so that whenever a chunk of data is received it is written to a temp file, and when the download completes the file is moved to the Application Support folder. Can someone help me on how to write it to a file and save it at the end? I cannot use 3rd party libraries (unless they're small) due to legal issues.


  • How do I download a large file (via HTTP) in .NET?

    - by nickcartwright
    I need to download a LARGE file (2GB) over HTTP in a C# console app. Problem is, after about 1.2GB, the app runs out of memory. Here's the code I'm using: WebClient request = new WebClient(); request.Credentials = new NetworkCredential(username, password); byte[] fileData = request.DownloadData(baseURL + fName); As you can see... I'm reading the file directly into memory. I'm pretty sure I could solve this if I were to read the data back from HTTP in chunks and write it to a file on disk. Does anyone know how I could do this?
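    The usual fix is exactly the chunked approach described above: open the response stream and copy it to a file in fixed-size buffers, so only one buffer is ever in memory. A minimal sketch, assuming the same baseURL, fName, username and password as the snippet; DownloadToFile and the 64 KB buffer size are illustrative choices, not requirements:

        using System.IO;
        using System.Net;

        static void DownloadToFile(string url, string localPath,
                                   string username, string password)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Credentials = new NetworkCredential(username, password);

            using (WebResponse response = request.GetResponse())
            using (Stream source = response.GetResponseStream())
            using (FileStream target = File.Create(localPath))
            {
                byte[] buffer = new byte[64 * 1024]; // 64 KB chunks
                int bytesRead;
                while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    target.Write(buffer, 0, bytesRead);
                }
            }
        }

        // usage: DownloadToFile(baseURL + fName, @"C:\temp\big.bin", username, password);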


  • Is it a problem if I query SQL Server 2005 and 2000 again and again?

    - by learner
    The Windows app I am constructing is for very low-end machines (Celeron with at most 128MB RAM). Of the following two approaches, which one is best? (I don't want the application to become a memory hog on low-end machines.) Approach one: query the database with Select GUID from Table1 where DateTime <= @givendate, which returns more than 300 thousand records (but only one field, i.e. GUID, so 300 thousand GUIDs), then run a loop over them to carry out the next step of this software. Approach two: query the database with Select Top 1 GUID from Table1 where DateTime <= @givendate again and again until all 300 thousand records are done; this returns only one GUID at a time, and I can do my next step of the operation on it. Which approach do you suggest will use fewer memory resources? (Speed/performance is not the issue here.)
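    A third option worth sketching: neither materialize all 300 thousand GUIDs at once nor pay 300 thousand round trips, but issue the query once and stream the rows with SqlDataReader, which keeps only the current row in client memory. This assumes a C#/.NET client; the connection string and the per-GUID processing step are placeholders:

        using System;
        using System.Data.SqlClient;

        static void ProcessGuids(string connectionString, DateTime givenDate)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT GUID FROM Table1 WHERE DateTime <= @givendate", conn))
            {
                cmd.Parameters.AddWithValue("@givendate", givenDate);
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Guid id = reader.GetGuid(0);
                        // next processing step for this GUID goes here
                    }
                }
            }
        }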


  • 'Caching' a large table in ASP.NET

    - by TheNewGuy
    I understand that each page refresh, especially in 'AjaxLand', causes my back-end/code-behind class to be called from scratch... This is a problem because my class (which is a member object in System.Web.UI.Page) contains A LOT of data that it sources from a database. So now every page refresh in AjaxLand is causing me to make large backend DB calls, rather than just reusing a class object from memory. Any fix for this? Is this where session variables come into play? Are session variables the only option I have to retain an object in memory that is linked to a single user and a single session instance?
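    If the data is genuinely per-user, Session state is the built-in answer; but if every user sees the same database-sourced data, the application-wide ASP.NET cache avoids keeping one copy per session. A minimal sketch, assuming System.Web's HttpRuntime.Cache; ReportData, LoadFromDatabase and the 10-minute expiration are placeholders:

        using System;
        using System.Web;
        using System.Web.Caching;

        public class ReportData { /* the large database-sourced state */ }

        public static class ReportDataCache
        {
            private const string Key = "ReportData"; // hypothetical cache key

            public static ReportData Get()
            {
                // One shared copy for all users and all requests.
                ReportData data = HttpRuntime.Cache[Key] as ReportData;
                if (data == null)
                {
                    data = LoadFromDatabase();
                    HttpRuntime.Cache.Insert(Key, data, null,
                        DateTime.UtcNow.AddMinutes(10),  // absolute expiration
                        Cache.NoSlidingExpiration);
                }
                return data;
            }

            private static ReportData LoadFromDatabase()
            {
                return new ReportData(); // expensive DB read goes here
            }
        }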


  • Doesn't Matlab optimize the following?

    - by kloop
    I have a very long 1xr vector v, a very long 1xs vector w, and an rxs matrix A, which is sparse (but very big in dimensions). I was expecting the following to be optimized by Matlab so I wouldn't run into trouble with memory: A./(v'*w) but it seems like Matlab actually tries to generate the full v'*w matrix, because I am running into an Out of memory error. Is there a way to overcome this? Note that there is no need to calculate all of v'*w, because many values of A are 0. EDIT: If it were possible, one way to do it would be A(find(A)) ./ (v'*w)(find(A)); but you can't select a subset of a matrix (v'*w in this case) without first calculating it and putting it in a variable.


  • Read a file into multiple byte[] arrays

    - by hankol
    I have an encryption algorithm (AES) that accepts a file converted to a byte array and encrypts it. Since I am going to process very big files, the JVM may run out of memory. I am planning to read the files into multiple byte arrays, each containing some part of the file. Then I iteratively feed the algorithm, and finally merge the output to produce the encrypted file. So my question is: is there any way to read a file part by part into multiple byte arrays? I thought I could use the following to read the file into a byte array: IOUtils.toByteArray(InputStream input). And then split the array into multiple byte arrays using: Arrays.copyOfRange(). But I am afraid that the first call, which reads the whole file into one byte array, will make the JVM run out of memory. Any suggestion please? thanks


  • Shared data in an ASP.NET application

    - by Barguast
    I have a basic ASP.NET application that serves data stored on disk: the data is loaded from files and sent as the response. I want to store the data loaded from these files in memory to reduce the number of disk reads. All of the requests will be asking for the same data, so it makes sense to have a single cache of in-memory data which is accessible to all requests. What is the best way to create a single accessible object instance which I can use to store and access this cached data? I've looked into HttpApplication, but apparently a new instance of it is created for parallel requests, so it doesn't fit my needs.
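    Since every request wants the same data, one option is a single lazily-initialized static holder, which exists once per application domain rather than once per request. A minimal sketch, assuming .NET 4's Lazy<T>; the file path and byte[] payload stand in for whatever the files actually contain:

        using System;
        using System.IO;

        public static class FileDataCache
        {
            // Lazy<T> makes the one-time load thread-safe even when the
            // first requests arrive in parallel.
            private static readonly Lazy<byte[]> data =
                new Lazy<byte[]>(LoadFromDisk, isThreadSafe: true);

            public static byte[] Data
            {
                get { return data.Value; }
            }

            private static byte[] LoadFromDisk()
            {
                // Runs exactly once, on first access.
                return File.ReadAllBytes(@"C:\data\payload.bin"); // hypothetical path
            }
        }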


  • Fastest C++ file compression library available?

    - by Fabien Hure
    I need to find the best library for compressing in-memory data. I am currently using zlib, but I am wondering if there is a better compression library, better in terms of performance and memory footprint. It should be able to handle multiple files in the same archive. I am looking for a C/C++ library. Performance is the key factor. The files being compressed are small to large XML files.


  • Java: empty ArrayLists in a for loop

    - by Patrick
    hi, I'm reusing the same ArrayLists in a for loop, and inside the loop I use results = new ArrayList<Integer>(); experts = new ArrayList<Integer>(); output = new ArrayList<String>(); .... to create new ones. I guess this is wrong, because I'm allocating new memory each time. Is this correct? If yes, how can I empty them instead? Added: another example. I'm creating new variables each time I call this method. Is this good practice? I mean creating a new precision, relevantFound, etc.? Or should I declare them in my class, outside the method, so as not to allocate more and more memory? public static void computeMAP(ArrayList results, ArrayList experts) { //compute MAP double precision = 0; int relevantFound = 0; double sumprecision = 0; thanks


  • Sequence Number in testing Spring application with JUnit (Hibernate, Spring MVC)

    - by MBK
    I am testing a DAO in a Spring application. @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = "classpath:/applicationContext.xml") @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true) @Transactional public class CommentDAOImplTest { @Autowired //testing methods here} The tests are running well. I am able to add a comment, and I also have the defaultRollback property set, so the added comment is deleted automatically. Happy!.. Now the problem is with the sequence number for mcomment. Can I roll back the sequence number in any way? Any suggestions on that? I don't want to mess up the sequence number. The business requires the comment ID to be shown. (I still don't know why.) I know an in-memory DB is an option... but I am guessing defaultRollback's purpose is to eliminate in-memory DB testing and mocking. (Just my opinion.)


  • Objective-C retain clarification

    - by Maverick
    I'm looking at this code: NSMutableArray *controllers = [[NSMutableArray alloc] init]; for (unsigned i = 0; i < kNumberOfPages; i++) { [controllers addObject:[NSNull null]]; } self.viewControllers = controllers; [controllers release]; Later on... - (void)dealloc { [viewControllers release]; ... } I see that self.viewControllers and controllers now point to the same allocated memory (of type NSMutableArray *), but when I call [controllers release], isn't self.viewControllers released as well? Or does setting self.viewControllers = controllers automatically retain that memory?


  • Tie destruction of an object (sealed) to destruction of an unmanaged buffer

    - by testtestSO
    I'll explain my situation first: I'm interested in using the Bitmap constructor that takes scan0, stride and format, because I'm decoding tiled images and I'd like to choose my own stride so I can decode the tiles without caring about the bounds in the decoder part. Anyway, the problem is that the documentation says: The caller is responsible for allocating and freeing the block of memory specified by the scan0 parameter. However, the memory should not be released until the related Bitmap is released. I can't release the buffer easily, because the Bitmap is then passed to another class that will eventually destroy it, and I don't have control over that. Is there some way (hacky, I know) to tell the GC to also release my buffer when the Bitmap is destroyed? (Also, any alternative solution is welcome.)
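    One hacky workaround in the spirit of the question: rather than asking the GC to couple the two lifetimes, couple them explicitly in a wrapper and hand the wrapper (not the raw Bitmap) to the consuming class, if that class can be changed to dispose an IDisposable. A minimal sketch; OwnedBitmap is a hypothetical name, not an existing API:

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        public sealed class OwnedBitmap : IDisposable
        {
            private GCHandle handle; // pins the managed buffer behind scan0

            public Bitmap Bitmap { get; private set; }

            public OwnedBitmap(byte[] buffer, int width, int height,
                               int stride, PixelFormat format)
            {
                // Pin the buffer so the GC can neither move nor collect it
                // while the Bitmap still points into it.
                handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
                Bitmap = new Bitmap(width, height, stride, format,
                                    handle.AddrOfPinnedObject());
            }

            public void Dispose()
            {
                Bitmap.Dispose();      // release the GDI+ object first
                if (handle.IsAllocated)
                    handle.Free();     // then unpin the buffer
            }
        }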


  • Fastest method of merging the two: dicts vs. lists

    - by tipu
    I'm doing some indexing, and memory is sufficient but CPU isn't. So I have one huge dictionary and then a smaller dictionary I'm merging into the bigger one: big_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1}} smaller_dict = {"the" : {"6" : 1, "7" : 1}} #after merging resulting_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1, "6" : 1, "7" : 1}} My question is: for the values in both dicts, should I use a dict (as displayed above) or a list (as displayed below), given that my priority is to spend memory in order to save CPU? For clarification, using a list would look like: big_dict = {"the" : [1, 2, 3, 4, 5]} smaller_dict = {"the" : [6,7]} #after merging resulting_dict = {"the" : [1, 2, 3, 4, 5, 6, 7]} Side note: the reason I'm using a dict nested in a dict rather than a set nested in a dict is that JSON won't let me do json.dumps, because a set isn't key/value pairs; it's (as far as the JSON library is concerned) {"a", "series", "of", "keys"} Also, after choosing between a dict and a list, how would I go about implementing the most CPU-efficient method of merging them? I appreciate the help.


  • Difference between local and instance variables in ruby

    - by fflyer05
    I am working on a script that creates several fairly complex nested hash data structures and then iterates through them, conditionally creating database records. This is a standalone script using ActiveRecord. After several minutes of running I noticed a significant lag in server responsiveness and discovered that the script, despite being niced to +19, was holding a steady 85-90% of total server memory. In this case I am using instance variables simply for readability: it helps to know what is going to be reused outside of the loop vs. what won't. Is there a reason not to use instance variables when they are not needed? Are there differences in memory allocation and management between local and instance variables? Would it help to set @variable = nil when it's no longer needed?


  • MySQL TEXT field performance

    - by Jonathon
    I have several TEXT and/or MEDIUMTEXT fields in each of our 1000 MySQL tables. I now know that TEXT fields are handled on disk rather than in memory when queried. Is that also true even if the field is not referenced in the query? For example, if I have a table (tbExam) with 2 fields (id int(11) and comment text) and I run SELECT id FROM tbExam, does MySQL still have to go to disk before returning results, or will it run that query in memory? I am trying to figure out whether I need to reconfigure our actual db tables to switch to varchar(xxxx) or keep the text fields and reconfigure the queries.


  • Optimal ASP.Net cache duration for a large site?

    - by HeroicLife
    I've read lots of material on how to do ASP.Net caching but little on the optimal duration that pages should be cached for. Let's say that I have a popular site with 50,000 pages. The content does not change frequently, so I could cache pages for up to an hour if I wanted. The server has 16 GB of RAM, but database connections are limited. How long should pages be cached for? My thinking is that if I set the cache duration too high (let's say 60 minutes), I will fill up memory with a fraction of the total content, which will continually be shuffled in and out of memory. Furthermore, let's say that 10% of the pages are responsible for 90% of traffic. If the popular pages are hit every second, and the unpopular ones every hour, then a 60 second cache would only keep the load-intensive content cached without sacrificing freshness. Should numerous but rarely-accessed content be cached at all?
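    One way to get exactly the behavior reasoned about above (popular pages stay warm, rarely-hit pages fall out) is a short sliding expiration, so an entry's lifetime is measured from its last hit rather than its creation. A minimal sketch against HttpRuntime.Cache; the key scheme, render delegate and 60-second window are assumptions:

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class PageCache
        {
            public static string GetOrRender(string cacheKey, Func<string> render)
            {
                string html = HttpRuntime.Cache[cacheKey] as string;
                if (html == null)
                {
                    html = render(); // the expensive page build
                    HttpRuntime.Cache.Insert(cacheKey, html, null,
                        Cache.NoAbsoluteExpiration,
                        TimeSpan.FromSeconds(60)); // evicted 60s after the last hit
                }
                return html;
            }
        }

    Under this scheme a page hit every second effectively stays cached, while a page hit once an hour is rebuilt each time, which matches the 10%/90% traffic split described in the question.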


  • C++: String and unions

    - by sub
    I'm having a (design) problem: I'm building an interpreter and I need some storage for variables. There are basically two types of content a variable here can have: string or int. I'm using a simple class for the variables, all variables are then stored in a vector. However, as a variable can hold a number or a string, I don't want C++ to allocate both and consume memory for no reason. That's why I wanted to use unions: union { string StringValue; int IntValue; } However, strings don't work with unions. Is there any workaround so no memory gets eaten for no reason?


  • Eclipse 3.5 (Cocoa) slowing down irregularly after some time

    - by chris_l
    Hi, I'd like to hear if anyone else encounters the same problem and doesn't use Google's GWT (2.0) plugins: sometimes my Eclipse 3.5 (Cocoa) slows down after some time of usage (=30 minutes), so that things like maximizing an editor or moving the splitters become unbearably slow (reacting only after several seconds). After an Eclipse restart, everything's fine again. I'm not running low on memory (neither free RAM nor memory available to Eclipse - Heap/Stack/PermGenSpace), and my system specs are not too bad. So far I know exactly one other person who sees the same problem, but he also uses the GWT plugins. Since these issues appear irregularly, they're hard to track. Before creating an issue on the GWT bug tracker, I'd like to find out if this also happens for somebody without Google's plugins. Thanks, Chris Edit: I'm running Snow Leopard 10.6.2


  • How to work with images (PNGs) of size 2-4MB

    - by Sam
    I am working with images of size 2 to 4MB. I want to edit images of resolution 1200x1600, performing scaling, translation and rotation operations. I want to add other images on top of that and save the result to the photo album. My app is crashing (giving a memory warning) after I successfully edit one image and save it to the album. I release some images when I get a memory warning, but it still crashes, as I am working with 2 images of size 3MB each and a context of size 1200x1600, and getting an image from the context at the same time. Is there any way to compress the images and work with them while performing scaling, translation and rotation operations?


  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a MediaTemple DV server (basic) with 512MB dedicated RAM; this is a CentOS-based VPS with Plesk and Virtuozzo. My experience with it from day 1 has been bad, and I could only soothe my server issues with several caching "Band-aids," but my sites are not as small as they were a year ago either, so the issues have worsened. I have 3 Drupal installs running on separate (Plesk) domains; 1 of those Drupal installs is a multisite that consists of 5-6 sites, 2 of which are bringing in actual traffic. Those caching "Band-aids" I mentioned are APC, which seemed to help a lot initially, and Drupal's Boost, which is considered a poor man's Varnish; it makes all my pages static for anonymous users. Last 30-day combined estimate on Google Analytics: 90k visitors, 260k pageviews. Issue: a lot of downtime. I am continually checking if my sites are up, and lately I have been finding them down more than 3 times daily. Restarting Apache will bring it back up, for some time. I have googled every error message and looked up ways to optimize my DV server, and I am beyond stumped as to what my next move is. Is this server bad, have I hit an impossibly low restriction such as the 12MB kernel memory barrier (kmemsize), is it on my end, do I need to optimize some more? *I have provided as much information as I can below; any help or suggestions will be appreciated. Common error messages I see in the log: [error] (12)Cannot allocate memory: fork: Unable to fork new process [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ? import traceback File "/usr/lib/python2.4/traceback.py", line 3, in ? import linecache ImportError: No module named linecache [error] python_handler: no interpreter callback found. [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***) [alert] Child 8125 returned a Fatal error... Apache is exiting!
[emerg] (43)Identifier removed: couldn't grab the accept mutex [emerg] (22)Invalid argument: couldn't release the accept mutex cat /proc/user_beancounters: Version: 2.5 uid resource held maxheld barrier limit failcnt 41548: kmemsize 4582652 5306699 12288832 13517715 21105036 lockedpages 0 0 600 600 0 privvmpages 38151 42676 229036 249036 0 shmpages 16274 16274 17237 17237 2 dummy 0 0 0 0 0 numproc 43 46 300 300 0 physpages 27260 29528 0 2147483647 0 vmguarpages 0 0 131072 2147483647 0 oomguarpages 27270 29538 131072 2147483647 0 numtcpsock 21 29 300 300 0 numflock 8 8 480 528 0 numpty 1 1 30 30 0 numsiginfo 0 1 1024 1024 0 tcpsndbuf 648440 675272 2867477 4096277 1711499 tcprcvbuf 301620 359716 2867477 4096277 0 othersockbuf 4472 4472 1433738 2662538 0 dgramrcvbuf 0 0 1433738 1433738 0 numothersock 12 12 300 300 0 dcachesize 0 0 2684271 2764800 0 numfile 3447 3496 6300 6300 3872 dummy 0 0 0 0 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 numiptent 14 14 200 200 0 TOP: (In January the load avg was really high 3-10, I was able to bring it down where it is currently is by giving APC more memory play around with) top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20 Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si Mem: 916144k total, 156668k used, 759476k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached MySQLTuner: (after optimizing every table and repairing any table with overage I got the fragmented count down to 86) [--] Data in MyISAM tables: 285M (Tables: 1105) [!!] Total fragmented tables: 86 [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M) [--] Reads / Writes: 79% / 21% [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads) [!!] Query cache prunes per day: 675307 [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)


  • MySQL 5.5 (Percona) assertion failure log.. what would cause this?

    - by Tom Geee
    256GB, 64 Core , AMD running Ubuntu 12.04 with Percona MySQL 5.5.28. Below is the assertion failure. We just had a second assertion failure (different "in file", position, etc) while running a large set of inserts. After the first failure, MySQL restarted after a reboot only - after continuously looping on the same error after trying to recover. I decided to do a mysqlcheck with -o for optimize. Since these are all Innodb tables (very large tables, 60+GB) this would do an alter table on all tables. In the middle of this , the below assertion failure happened again: 121115 22:30:31 InnoDB: Assertion failure in thread 140086589445888 in file btr0pcur.c line 452 InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == buf_block_get_page_no(btr_pcur_get_block(cursor)) InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 03:30:31 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/ key_buffer_size=536870912 read_buffer_size=131072 max_used_connections=404 max_threads=500 thread_count=90 connection_count=90 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1618416 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x14edeb710 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f687366ce80 thread_stack 0x30000 /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7b52ee] /usr/sbin/mysqld(handle_fatal_signal+0x484)[0x68f024] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9cbb23fcb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f9cbaea6425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f9cbaea9b8b] /usr/sbin/mysqld[0x858463] /usr/sbin/mysqld[0x804513] /usr/sbin/mysqld[0x808432] /usr/sbin/mysqld[0x7db8bf] /usr/sbin/mysqld(_Z13rr_sequentialP11READ_RECORD+0x1d)[0x755aed] /usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x216b)[0x60399b] /usr/sbin/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x166)[0x604bd6] /usr/sbin/mysqld[0x647da1] /usr/sbin/mysqld(_ZN24Optimize_table_statement7executeEP3THD+0xde)[0x64891e] /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1168)[0x59b558] /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x30c)[0x5a132c] /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1620)[0x5a2a00] /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63ce6f] /usr/sbin/mysqld(handle_one_connection+0x51)[0x63cf31] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9cbb237e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9cbaf63cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6300004b60): is an invalid pointer Connection ID (thread ID): 876 Status: NOT_KILLED You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash. 121115 22:31:07 [Note] Plugin 'FEDERATED' is disabled. 121115 22:31:07 InnoDB: The InnoDB memory heap is disabled 121115 22:31:07 InnoDB: Mutexes and rw_locks use GCC atomic builtins .. Then it recovered , without a reboot this time. from the log, what would cause this? I am currently running a dump to see if the problem resurfaces. edit: data partition is all in / since this is a hosted, defaulted file system unfortunately: Filesystem Size Used Avail Use% Mounted on /dev/vda3 742G 445G 260G 64% / udev 121G 4.0K 121G 1% /dev tmpfs 49G 248K 49G 1% /run none 5.0M 0 5.0M 0% /run/lock none 121G 0 121G 0% /run/shm /dev/vda1 99M 54M 40M 58% /boot my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] skip-name-resolve innodb_file_per_table default_storage_engine=InnoDB user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /data/mysql tmpdir = /tmp skip-external-locking key_buffer = 512M max_allowed_packet = 128M thread_stack = 192K thread_cache_size = 64 myisam-recover = BACKUP max_connections = 500 table_cache = 812 table_definition_cache = 812 #query_cache_limit = 4M #query_cache_size = 512M join_buffer_size = 512K innodb_additional_mem_pool_size = 20M innodb_buffer_pool_size = 196G #innodb_file_io_threads = 4 #innodb_thread_concurrency = 12 innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 8M innodb_log_file_size = 1024M innodb_log_files_in_group = 2 innodb_max_dirty_pages_pct = 90 innodb_lock_wait_timeout = 120 log_error = /var/log/mysql/error.log long_query_time = 5 slow_query_log = 1 slow_query_log_file = /var/log/mysql/slowlog.log [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] [isamchk] key_buffer = 16M

    Read the article
