Search Results

Search found 2283 results on 92 pages for 'resume improvement'.

Page 75/92 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that on the new db server mysqld.exe consumes 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup. Here is the my.ini file:

      #MySQL Server Instance Configuration File
      # ----------------------------------------------------------------------
      # Generated by the MySQL Server Instance Configuration Wizard
      #
      #
      # Installation Instructions
      # ----------------------------------------------------------------------
      #
      # On Linux you can copy this file to /etc/my.cnf to set global options,
      # mysql-data-dir/my.cnf to set server-specific options
      # (@localstatedir@ for this installation) or to
      # ~/.my.cnf to set user-specific options.
      #
      # On Windows you should keep this file in the installation directory
      # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
      # make sure the server reads the config file use the startup option
      # "--defaults-file".
      #
      # To run run the server from the command line, execute this in a
      # command line shell, e.g.
      # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
      #
      # To install the server as a Windows service manually, execute this in a
      # command line shell, e.g.
      # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
      #
      # And then execute this in a command line shell to start the server, e.g.
      # net start MySQLXY
      #
      #
      # Guildlines for editing this file
      # ----------------------------------------------------------------------
      #
      # In this file, you can use all long options that the program supports.
      # If you want to know the options a program supports, start the program
      # with the "--help" option.
      #
      # More detailed information about the individual options can also be
      # found in the manual.
      #
      #
      # CLIENT SECTION
      # ----------------------------------------------------------------------
      #
      # The following options will be read by MySQL client applications.
      # Note that only client applications shipped by MySQL are guaranteed
      # to read this section. If you want your own MySQL client program to
      # honor these values, you need to specify it as an option during the
      # MySQL client library initialization.
      #
      [client]
      port=3306

      [mysql]
      default-character-set=latin1

      # SERVER SECTION
      # ----------------------------------------------------------------------
      #
      # The following options will be read by the MySQL Server. Make sure that
      # you have installed the server correctly (see above) so it reads this
      # file.
      #
      [mysqld]

      # The TCP/IP Port the MySQL Server will listen on
      port=3306

      #Path to installation directory. All paths are usually resolved relative to this.
      basedir="D:/MySQL/"

      #Path to the database root
      datadir="D:/MySQL/data"

      # The default character set that will be used when a new schema or table is
      # created and no character set is defined
      default-character-set=latin1

      # The default storage engine that will be used when create new tables when
      default-storage-engine=MYISAM

      # Set the SQL mode to strict
      #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
      # we changed this because there are a couple of queries that can get blocked otherwise
      sql-mode=""

      #performance configs
      skip-locking
      max_allowed_packet = 1M
      table_open_cache = 512

      # The maximum amount of concurrent sessions the MySQL server will
      # allow. One of these connections will be reserved for a user with
      # SUPER privileges to allow the administrator to login even if the
      # connection limit has been reached.
      max_connections=1510

      # Query cache is used to cache SELECT results and later return them
      # without actual executing the same query once again. Having the query
      # cache enabled may result in significant speed improvements, if your
      # have a lot of identical queries and rarely changing tables. See the
      # "Qcache_lowmem_prunes" status variable to check if the current value
      # is high enough for your load.
      # Note: In case your tables change very often or if your queries are
      # textually different every time, the query cache may result in a
      # slowdown instead of a performance improvement.
      query_cache_size=168M

      # The number of open tables for all threads. Increasing this value
      # increases the number of file descriptors that mysqld requires.
      # Therefore you have to make sure to set the amount of open files
      # allowed to at least 4096 in the variable "open-files-limit" in
      # section [mysqld_safe]
      table_cache=3020

      # Maximum size for internal (in-memory) temporary tables. If a table
      # grows larger than this value, it is automatically converted to disk
      # based table This limitation is for a single table. There can be many
      # of them.
      tmp_table_size=30M

      # How many threads we should keep in a cache for reuse. When a client
      # disconnects, the client's threads are put in the cache if there aren't
      # more than thread_cache_size threads from before. This greatly reduces
      # the amount of thread creations needed if you have a lot of new
      # connections. (Normally this doesn't give a notable performance
      # improvement if you have a good thread implementation.)
      thread_cache_size=64

      #*** MyISAM Specific options

      # The maximum size of the temporary file MySQL is allowed to use while
      # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE.
      # If the file-size would be bigger than this, the index will be created
      # through the key cache (which is slower).
      myisam_max_sort_file_size=100G

      # If the temporary file used for fast index creation would be bigger
      # than using the key cache by the amount specified here, then prefer the
      # key cache method. This is mainly used to force long character keys in
      # large tables to use the slower key cache method to create the index.
      myisam_sort_buffer_size=64M

      # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
      # Do not set it larger than 30% of your available memory, as some memory
      # is also required by the OS to cache rows. Even if you're not using
      # MyISAM tables, you should still set it to 8-64M as it will also be
      # used for internal temporary disk tables.
      key_buffer_size=3072M

      # Size of the buffer used for doing full table scans of MyISAM tables.
      # Allocated per thread, if a full scan is needed.
      read_buffer_size=2M
      read_rnd_buffer_size=8M

      # This buffer is allocated when MySQL needs to rebuild the index in
      # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE
      # into an empty table. It is allocated per thread so be careful with
      # large settings.
      sort_buffer_size=2M

      #*** INNODB Specific options ***
      innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

      # Use this option if you have a MySQL server with InnoDB support enabled
      # but you do not plan to use it. This will save memory and disk space
      # and speed up some things.
      skip-innodb

      # Additional memory pool that is used by InnoDB to store metadata
      # information. If InnoDB requires more memory for this purpose it will
      # start to allocate it from the OS. As this is fast enough on most
      # recent operating systems, you normally do not need to change this
      # value. SHOW INNODB STATUS will display the current amount used.
      innodb_additional_mem_pool_size=11M

      # If set to 1, InnoDB will flush (fsync) the transaction logs to the
      # disk at each commit, which offers full ACID behavior. If you are
      # willing to compromise this safety, and you are running small
      # transactions, you may set this to 0 or 2 to reduce disk I/O to the
      # logs. Value 0 means that the log is only written to the log file and
      # the log file flushed to disk approximately once per second. Value 2
      # means the log is written to the log file at each commit, but the log
      # file is only flushed to disk approximately once per second.
      innodb_flush_log_at_trx_commit=1

      # The size of the buffer InnoDB uses for buffering log data. As soon as
      # it is full, InnoDB will have to flush it to disk. As it is flushed
      # once per second anyway, it does not make sense to have it very large
      # (even with long transactions).
      innodb_log_buffer_size=6M

      # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
      # row data. The bigger you set this the less disk I/O is needed to
      # access data in tables. On a dedicated database server you may set this
      # parameter up to 80% of the machine physical memory size. Do not set it
      # too large, though, because competition of the physical memory may
      # cause paging in the operating system. Note that on 32bit systems you
      # might be limited to 2-3.5G of user level memory per process, so do not
      # set it too high.
      innodb_buffer_pool_size=500M

      # Size of each log file in a log group. You should set the combined size
      # of log files to about 25%-100% of your buffer pool size to avoid
      # unneeded buffer pool flush activity on log file overwrite. However,
      # note that a larger logfile size will increase the time needed for the
      # recovery process.
      innodb_log_file_size=100M

      # Number of threads allowed inside the InnoDB kernel. The optimal value
      # depends highly on the application, hardware as well as the OS
      # scheduler properties. A too high value may lead to thread thrashing.
      innodb_thread_concurrency=10

      #replication settings (this is the master)
      log-bin=log
      server-id = 1

    Thanks for all the help. It is greatly appreciated.
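
    For reference, a rough starting point for a MyISAM-only dedicated box of this size is sketched below. The numbers are assumptions to be sanity-checked against SHOW STATUS output (Key_reads vs Key_read_requests, Created_tmp_disk_tables, Qcache_lowmem_prunes), not drop-in values, and they presume a 64-bit mysqld so the key buffer can actually use the RAM.

      # hypothetical [mysqld] fragment for a 16G, MyISAM-only master -- every value is an assumption
      key_buffer_size=4096M        # MyISAM caches only indexes; roughly 25% of RAM is a common ceiling
      tmp_table_size=256M          # paired with max_heap_table_size so GROUP BY/ORDER BY stay in memory
      max_heap_table_size=256M
      query_cache_size=64M         # a very large query cache can add mutex contention on a write-heavy master
      thread_cache_size=64
      max_connections=400          # 1510 concurrent threads on 8 cores mostly buys context switching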

    Read the article

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody, in my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This runs pretty slowly on big files, and since I don't see any other choice (the iteration has to be done) I'm trying to optimize file loading. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I could read something like 100Mb once and work with that buffer until there are no elements left in it, then refill the buffer again. But I guess ifstream is already doing that — will it give me more performance, and is there any reason to do it myself? If fstream does buffer, maybe I can change the size of that buffer? Added: my current code looks like this (pseudocode):

      // this is done in a loop
      int i1 = input1.read_integer();
      int i2 = input2.read_integer();
      if (!input1.eof() && !input2.eof()) {
          if (i1 < i2) {
              output.write(i1);
              input2.seek_back(sizeof(int));
          } else {
              input1.seek_back(sizeof(int));
              output.write(i2);
          }
      } else {
          if (input1.eof()) output.write(i2);
          else if (input2.eof()) output.write(i1);
      }

    What I don't like here: the seek_back — I have to seek back to the previous position because there is no way to peek 4 bytes; and there is too much reading from file — if one of the streams is at EOF it still keeps checking that stream instead of putting the contents of the other stream directly to the output, but this is not a big issue because the chunk sizes are almost always equal. Can you suggest improvements for that? Thanks.
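
    For what it's worth, a minimal sketch of the two ideas: giving each ifstream a bigger buffer via rdbuf()->pubsetbuf() (which must happen before open), and keeping the last value read in a local variable so no seek_back is needed. File names and the 1 MB buffer size are placeholders.

      #include <fstream>
      #include <vector>
      #include <cstdint>

      int main() {
          std::vector<char> buf1(1 << 20), buf2(1 << 20);    // 1 MB stream buffers (assumption)
          std::ifstream in1, in2;
          in1.rdbuf()->pubsetbuf(buf1.data(), buf1.size());  // must precede open()
          in2.rdbuf()->pubsetbuf(buf2.data(), buf2.size());
          in1.open("a.bin", std::ios::binary);
          in2.open("b.bin", std::ios::binary);
          std::ofstream out("merged.bin", std::ios::binary);

          std::int32_t v1, v2;
          bool have1 = static_cast<bool>(in1.read(reinterpret_cast<char*>(&v1), sizeof v1));
          bool have2 = static_cast<bool>(in2.read(reinterpret_cast<char*>(&v2), sizeof v2));
          while (have1 && have2) {
              if (v1 <= v2) {          // write the smaller value and advance only that stream
                  out.write(reinterpret_cast<const char*>(&v1), sizeof v1);
                  have1 = static_cast<bool>(in1.read(reinterpret_cast<char*>(&v1), sizeof v1));
              } else {
                  out.write(reinterpret_cast<const char*>(&v2), sizeof v2);
                  have2 = static_cast<bool>(in2.read(reinterpret_cast<char*>(&v2), sizeof v2));
              }
          }
          while (have1) {              // drain whichever file still has data
              out.write(reinterpret_cast<const char*>(&v1), sizeof v1);
              have1 = static_cast<bool>(in1.read(reinterpret_cast<char*>(&v1), sizeof v1));
          }
          while (have2) {
              out.write(reinterpret_cast<const char*>(&v2), sizeof v2);
              have2 = static_cast<bool>(in2.read(reinterpret_cast<char*>(&v2), sizeof v2));
          }
          return 0;
      }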

    Read the article

  • Are Thread.stop and friends ever safe in Java?

    - by Stephen C
    The stop(), suspend(), and resume() methods in java.lang.Thread are deprecated because they are unsafe. The Sun-recommended workaround is to use Thread.interrupt(), but that approach doesn't work in all cases. For example, if you call a library method that doesn't explicitly or implicitly check the interrupted flag, you have no choice but to wait for the call to finish. So, I'm wondering if it is possible to characterize situations where it is (provably) safe to call stop() on a Thread. For example, would it be safe to stop() a thread that did nothing but call find(...) or match(...) on a java.util.regex.Matcher? (If there are any Sun engineers reading this ... a definitive answer would be really appreciated.) EDIT: Answers that simply restate the mantra that you should not call stop() because it is deprecated, unsafe, whatever, are missing the point of this question. I know that it is genuinely unsafe in the majority of cases, and that if there is a viable alternative you should always use that instead. This question is about the subset of cases where it is safe. Specifically, what is that subset?
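
    One pattern worth noting for the regex case specifically (a sketch, not an endorsement of stop()): Matcher runs against a CharSequence, so the input can be wrapped in a CharSequence whose charAt() polls the interrupt flag, making the match abort cooperatively without stop(). The class and exception message below are made up for illustration.

      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      // Hypothetical wrapper: makes any regex match abort when the thread is interrupted.
      final class InterruptibleCharSequence implements CharSequence {
          private final CharSequence inner;
          InterruptibleCharSequence(CharSequence inner) { this.inner = inner; }

          public char charAt(int index) {
              if (Thread.currentThread().isInterrupted()) {
                  throw new RuntimeException("regex matching interrupted");
              }
              return inner.charAt(index);
          }
          public int length() { return inner.length(); }
          public CharSequence subSequence(int start, int end) {
              return new InterruptibleCharSequence(inner.subSequence(start, end));
          }
          public String toString() { return inner.toString(); }
      }

      class RegexDemo {
          static boolean find(Pattern p, String input) {
              Matcher m = p.matcher(new InterruptibleCharSequence(input));
              return m.find();   // throws the RuntimeException above once interrupt() is called
          }
      }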

    Read the article

  • Reusing named_scope to define another named_scope

    - by Sergei Kozlov
    The problem essence as I see it One day, if I'm not mistaken, I have seen an example of reusing a named_scope to define another named_scope. Something like this (can't remember the exact syntax, but that's exactly my question): named_scope :billable, :conditions => ... named_scope :billable_by_tom, :conditions => { :billable => true, :user => User.find_by_name('Tom') } The question is: what is the exact syntax, if it's possible at all? I can't find it back, and Google was of no help either. Some explanations Why I actually want it, is that I'm using Searchlogic to define a complex search, which can result in an expression like this: Card.user_group_managers_salary_greater_than(100) But it's too long to be put everywhere. Because, as far as I know, Searchlogic simply defines named_scopes on the fly, I would like to set a named_scope on the Card class like this: named_scope from_big_guys, { user_group_managers_salary_greater_than(100) } - this is where I would use that long Searchlogic method inside my named_scope. But, again, what would be the syntax? Can't figure it out. Resume So, is named_scope nesting (and I do not mean chaining) actually possible?
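
    For what it's worth, the two usual ways to reuse one scope inside another are a plain class method that returns the chained scope, or a lambda-style named_scope. A sketch, reusing the names from the question and assuming a user_id column (untested):

      class Card < ActiveRecord::Base
        named_scope :billable, :conditions => { :billable => true }

        # A plain class method behaves like a scope when it returns one,
        # so it can simply call the Searchlogic-generated scope:
        def self.from_big_guys
          user_group_managers_salary_greater_than(100)
        end

        # named_scope also accepts a lambda whose body builds the conditions at call time:
        named_scope :billable_by_tom, lambda {
          { :conditions => { :billable => true, :user_id => User.find_by_name('Tom').id } }
        }
      end

      # Both chain like any other scope:
      # Card.from_big_guys.billable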

    Read the article

  • Indexing table with duplicates MySQL/MSSQL with millions of records

    - by Tesnep
    I need help in indexing in MySQL. I have a table in MySQL with following rows: ID Store_ID Feature_ID Order_ID Viewed_Date Deal_ID IsTrial The ID is auto generated. Store_ID goes from 1 - 8. Feature_ID from 1 - let's say 100. Viewed Date is Date and time on which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID from this discussion. There are millions of data in the table and we have a reporting backend that needs to view the number of views in a certain period or overall where trial is 0 for a particular store id and for a particular feature. The query takes the form of: select count(viewed_date) from theTable where viewed_date between '2009-12-01' and '2010-12-31' and store_id = '2' and feature_id = '12' and Istrial = 0 In MSSQL you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search period, I need better improvement than this. Right now I have more than 4 million rows. To search for a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS. I forgot to add view_date filter in the query. Now I have done this.
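
    As a sketch of the usual MySQL substitute for a filtered index: a composite index whose leading columns are the equality predicates and whose last column is the range column, so the query above can be resolved almost entirely from the index. Column order is the important part; names follow the question.

      ALTER TABLE theTable
        ADD INDEX ix_store_feature_trial_date (store_id, feature_id, istrial, viewed_date);

      -- equality columns first (store_id, feature_id, istrial), the BETWEEN column last,
      -- so MySQL can seek straight to the matching range instead of scanning 3.5M rows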

    Read the article

  • how to delete multiple folders,desktop and start menu shortcut using vbscript

    - by user1756858
    I have never done any VBScript before, so I don't know if my question is a very easy one. The following is the flow of steps that has to be done: Check whether a folder exists at c:\test1 and delete it if found, then continue. Check whether a folder exists at c:\programfiles\test2 and delete it if found, then continue. Check whether a desktop shortcut and a Start Menu shortcut exist and delete them if found; if not, exit. I could delete two folders with the following code:

      strPath1 = "C:\test1"
      strPath1 = "C:\test1"
      DeleteFolder strPath1
      DeleteFolder strPath1

      Function DeleteFolder(strFolderPath1)
          Dim objFSO, objFolder
          Set objFSO = CreateObject ("Scripting.FileSystemObject")
          If objFSO.FolderExists(strFolderPath) Then
              objFSO.DeleteFolder strFolderPath, True
          End If
          Set objFSO = Nothing
      End Function

    But I need to run one script to delete 2 folders in different paths and 2 shortcuts, one in the Start Menu and one on the desktop. I was experimenting with this code to delete the shortcut on my desktop:

      Dim WSHShell, DesktopPath
      Set WSHShell = WScript.CreateObject("WScript.Shell")
      DesktopPath = WSHShell.SpecialFolders("Desktop")
      On Error Resume Next
      Icon = DesktopPath & "\sample.txt"
      Set fs = CreateObject("Scripting.FileSystemObject")
      Set A = fs.GetFile(Icon)
      A.Delete
      WScript.Quit

    It works fine for a txt file on the desktop, but how do I delete a shortcut for an application from the desktop as well as the Start Menu?
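
    A rough sketch of one script that covers all the steps — two folders, then the desktop and Start Menu shortcuts — using FileSystemObject plus WScript.Shell's SpecialFolders collection. The folder paths and the shortcut name MyApp.lnk are placeholders.

      Option Explicit
      Dim fso, shell
      Set fso = CreateObject("Scripting.FileSystemObject")
      Set shell = CreateObject("WScript.Shell")

      DeleteFolderIfExists "C:\test1"
      DeleteFolderIfExists "C:\Program Files\test2"

      ' the shortcut file name is a placeholder
      DeleteFileIfExists shell.SpecialFolders("Desktop") & "\MyApp.lnk"
      DeleteFileIfExists shell.SpecialFolders("StartMenu") & "\Programs\MyApp.lnk"

      Sub DeleteFolderIfExists(path)
          If fso.FolderExists(path) Then fso.DeleteFolder path, True
      End Sub

      Sub DeleteFileIfExists(path)
          If fso.FileExists(path) Then fso.DeleteFile path, True
      End Sub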

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively. Edit: it can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN). Another edit: let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory. Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.
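
    A minimal single-pass sketch using only core Perl 5.8: open an in-memory filehandle on the already-loaded data (per the edit's constraint) and let the readline loop split lines and fields in the same pass. It assumes the fields really are plain alphanumeric, i.e. no quoted commas.

      #!/usr/bin/perl
      use strict;
      use warnings;

      # assume $data already holds the whole file, as the question requires
      my $data = do { local $/; open my $in, '<', 'data.csv' or die $!; <$in> };

      open my $fh, '<', \$data or die $!;        # in-memory filehandle, core since 5.8
      while (my $line = <$fh>) {                 # one pass: lines and fields split together
          chomp $line;
          my @fields = split /,/, $line, -1;     # -1 keeps trailing empty fields
          # ... process @fields ...
      }
      close $fh;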

    Read the article

  • Is it safe to use a boolean flag to stop a thread from running in C#

    - by Lirik
    My main concern is with the boolean flag... is it safe to use it without any synchronization? I've read in several places that it's atomic.

      class MyTask
      {
          private ManualResetEvent startSignal;
          private CountDownLatch latch;
          private bool running;

          MyTask(CountDownLatch latch)
          {
              running = false;
              this.latch = latch;
              startSignal = new ManualResetEvent(false);
          }

          // A method which runs in a thread
          public void Run()
          {
              startSignal.WaitOne();
              while (running)
              {
                  startSignal.WaitOne();
                  //... some code
              }
              latch.Signal();
          }

          public void Stop()
          {
              running = false;
              startSignal.Set();
          }

          public void Start()
          {
              running = true;
              startSignal.Set();
          }

          public void Pause()
          {
              startSignal.Reset();
          }

          public void Resume()
          {
              startSignal.Set();
          }
      }

    Is this a safe way to design a task? Any suggestions, improvements, comments? Note: I wrote my custom CountDownLatch class in case you're wondering where I'm getting it from.
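
    One detail worth illustrating: reads and writes of a plain bool are atomic, but without a memory barrier the worker thread may keep seeing a stale value; marking the field volatile (or guarding it with a lock) is the usual minimal fix. A sketch of just that change:

      // Declaring the flag volatile guarantees the Run() loop sees the value written by Stop().
      private volatile bool running;

      public void Stop()
      {
          running = false;       // now visible to the worker thread without extra locking
          startSignal.Set();
      }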

    Read the article

  • $_FILES is null, $_POST is not null

    - by Cory Dee
    When I am going to upload a file, my $_POST variable knows the file name, but the $_FILES variable is null. I've used this code before, so I'm really stumped. Here's what I'm using for input: <label for="importFile">Attach Resume:</label> <input type="hidden" name="MAX_FILE_SIZE" value="10000000"> <input type="file" name="importFile" id="importFile" class="validate['required']"> And for processing: $uploaddir = "E:/Sites/OPL/2008/assets/apps/newjobs/resumes/"; $uploadfile = $uploaddir . time() . '-' . urlencode(basename($_FILES['importFile']['name'])); if (!move_uploaded_file($_FILES['importFile']['tmp_name'], $uploadfile)) { echo 'Error uploading file. Error number: ' . $_FILES['importFile']['error']; var_dump($_FILES['importFile']); echo $_POST['importFile']; die(); } Which is giving me this result: Error uploading file. Error number: NULL Maintaining The OPL Website.doc Any help would be greatly appreciated.
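
    The symptom described (the file name shows up in $_POST while $_FILES is empty) is typically what happens when the form is posted without multipart encoding, so as a sketch the enclosing form would need to look something like the following; the action attribute is a placeholder, and the upload_max_filesize/post_max_size limits in php.ini are the other usual suspects worth checking.

      <form action="process.php" method="post" enctype="multipart/form-data">
          <label for="importFile">Attach Resume:</label>
          <input type="hidden" name="MAX_FILE_SIZE" value="10000000">
          <input type="file" name="importFile" id="importFile" class="validate['required']">
          <input type="submit" value="Upload">
      </form>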

    Read the article

  • Setup for games animation: How do I know JFrame is finished setting itself up?

    - by Jokkel
    I'm using javax.swing.JFrame to draw game animations using double buffer strategy. First, I set up the frame. JFrame frame = new JFrame(); frame.setVisible(true); Now, I draw an object (let it be a circle, doesn't matter) like this. frame.createBufferStrategy(2); bufferStrategy = frame.getBufferStrategy(); Graphics g = bufferStrategy.getDrawGraphics(); circle.draw(g); bufferStrategy.show(); The problem is that the frame is not always fully set-up when the drawing takes place. Seems like JFrame needs up to three steps in resizing itself, until it reaches it's final size. That makes the drawing slide out of frame or hinders it to appear completely from time to time. I already managed to delay things using SwingUtilities.invokeLater(). While this improved the result, there are still times when the drawing slides away / looks prematurely draw. Any idea / strategy? Thanks in advance. EDIT: Ok thanks so far. I didn't mention that I write a little Pong game in the first place. Sorry for the confusion What I actually looked for was the right setup for accelerated game animations done in Java. While reading through the suggestions I found my question answered (though indirectly) here and this example made things clear for me. A resume for this might be that for animating game graphics in Java, the first step is to get rid of the GUI logic overhead.
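
    A common ordering for an active-rendering setup is sketched below: fix the frame size before making it visible, create the buffer strategy once the frame is displayable, turn off Swing's own painting with setIgnoreRepaint(true), and run the draw loop off the EDT. The window size, frame rate and the placeholder oval stand in for the circle from the question.

      import java.awt.Graphics;
      import java.awt.image.BufferStrategy;
      import javax.swing.JFrame;

      public class GameWindow {
          public static void main(String[] args) {
              JFrame frame = new JFrame("Game");
              frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
              frame.setSize(800, 600);          // size the frame before showing it
              frame.setResizable(false);
              frame.setIgnoreRepaint(true);     // we paint ourselves; keep Swing's repaints out of it
              frame.setVisible(true);

              frame.createBufferStrategy(2);    // frame is displayable now, so this is safe
              BufferStrategy strategy = frame.getBufferStrategy();

              while (true) {                    // game loop on the main thread, not the EDT
                  Graphics g = strategy.getDrawGraphics();
                  g.clearRect(0, 0, frame.getWidth(), frame.getHeight());
                  g.fillOval(100, 100, 50, 50); // placeholder for circle.draw(g)
                  g.dispose();
                  strategy.show();
                  try { Thread.sleep(16); } catch (InterruptedException e) { break; }
              }
          }
      }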

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?
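
    A sketch of the partitioning idea using .NET 4's Parallel.ForEach, assuming each worker gets its own DataContext (LINQ to SQL contexts are not thread-safe, so sharing one across the partitions is the main thing to avoid). Order, OrderContext and GetItemWithChildren are hypothetical names standing in for the entity, the DataContext and the eager-loading GetItem call.

      using System.Collections.Concurrent;
      using System.Collections.Generic;
      using System.Linq;
      using System.Threading.Tasks;

      public static IList<Order> LoadFullyHydrated(IList<int> ids)
      {
          var results = new ConcurrentBag<Order>();

          Parallel.ForEach(
              ids,
              new ParallelOptions { MaxDegreeOfParallelism = 5 },   // mirrors the 5-thread experiment
              () => new OrderContext(),                             // one DataContext per worker thread
              (id, loopState, context) =>
              {
                  results.Add(GetItemWithChildren(context, id));    // eager-load children as before
                  return context;
              },
              context => context.Dispose());

          return results.ToList();
      }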

    Read the article

  • Skip subdirectory in python import

    - by jstaab
    OK, so I'm trying to change this:

      app/
      - lib.py
      - models.py
      - blah.py

    into this:

      app/
      - __init__.py
      - lib.py
      - models/
          - __init__.py
          - user.py
          - account.py
          - banana.py
      - blah.py

    and still be able to import my models using from app.models import User rather than having to change it to from app.models.user import User all over the place. Basically, I want everything to treat the package as a single module, but be able to navigate the code in separate files for development ease. The reason I can't do something like adding for file in __all__: from file import * into __init__.py is that I have circular references between the model files. A fix I don't want is to import those models from within the functions that use them — that's super ugly. Let me give you an example:

      # user.py
      ...
      from app.models import Banana
      ...

      # banana.py
      ...
      from app.models import User
      ...

    I wrote a quick pre-processing script that grabs all the files, rewrites them to put imports at the top, and puts everything into models.py, but that's hardly an improvement, since now my stack traces don't show the line number I actually need to change. Any ideas? I always thought __init__ was probably magical, but now that I dig into it, I can't find anything that lets me provide myself this really simple convenience.
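
    One sketch of the usual compromise: keep app/models/__init__.py re-exporting the public names (so from app.models import User keeps working), and break the user/banana cycle by having each module import the other module rather than the class, so the attribute lookup is deferred until call time. The names mirror the question; treat it as a pattern, not drop-in code.

      # app/models/__init__.py -- re-export so "from app.models import User" keeps working
      from app.models.user import User
      from app.models.account import Account
      from app.models.banana import Banana

      # app/models/user.py
      import app.models.banana          # module import: safe even while the package is half-initialised

      class User(object):
          def favourite_fruit(self):
              return app.models.banana.Banana()   # class resolved at call time, after everything is loaded

      # app/models/banana.py
      import app.models.user

      class Banana(object):
          def owner(self):
              return app.models.user.User()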

    Read the article

  • Questions about HTML5 audio

    - by Nimbuz
    <audio src="http://upload.wikimedia.org/wikipedia/commons/8/82/Riddle_song.ogg"></audio> <ul id="lyrics"> <li>line 1</li> <li>line 2</li> <li>line 3</li> <li>and so on...</li> </ul><!-- end #lyrics --> So I want to: Highlight (change color or background) of the line that is being played. Save current time to a cookie and resume on next visit. I'm not sure if either of these are possible in HTML5, but even in Flash or other technology, I'd like to know if and how it is possible. I understand #2 is asking too much, but #1 is really important. So almost similar to this: http://randallagordon.com/jaraoke/ but all the lines are visible, just the current line is highlighted. Many thanks for your help.
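
    Both parts look doable with the plain HTML5 media API: timeupdate plus currentTime for the highlighting, and writing currentTime into localStorage (or a cookie) for the resume. A rough sketch, with the per-line start times made up:

      <script>
        var audio  = document.querySelector('audio');
        var lines  = document.querySelectorAll('#lyrics li');
        var starts = [0, 12.5, 24.0, 37.2];            // start time of each line in seconds (placeholder data)

        audio.addEventListener('timeupdate', function () {
          for (var i = 0; i < lines.length; i++) {
            var isCurrent = audio.currentTime >= starts[i] &&
                            (i + 1 >= starts.length || audio.currentTime < starts[i + 1]);
            lines[i].style.background = isCurrent ? '#ffef9e' : '';
          }
          localStorage.setItem('resumeAt', audio.currentTime);   // remember the position for the next visit
        });

        audio.addEventListener('loadedmetadata', function () {
          var saved = parseFloat(localStorage.getItem('resumeAt'));
          if (!isNaN(saved)) audio.currentTime = saved;          // resume where the listener left off
        });
      </script>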

    Read the article

  • [iPhone] Play video not in full screen mode

    - by Kyo
    Hello, first, excuse my English :) I've read on the Apple developer website that the video playback provided by the framework supports only full-screen mode. I need to develop an application where video can be played at a reduced screen size. I've seen that Orange TV has made something that looks like what I need to do: http://img218.imageshack.us/img218/1228/tvplayerorange.jpg The application is available on the App Store, but you need a subscription to test it. Anyway, to sum it up: you can watch video (a TV stream) at a reduced size, and if you tap the screen it switches to full-screen mode. So my question: what I want to do is clearly possible (Orange TV made it), but I wonder how difficult it is to build. It seems that I would have to write a video player of my own. If that would take too long, I think I will use the iPhone's Media Player framework, even though it isn't the optimal solution for me. Feel free to ask me for more details ;) Thank you for your answers.

    Read the article

  • how can I store the current status of the game in cocos2d ?

    - by srikanth rongali
    I am writing a shooting game in cocos2d. And each enemy enters the screen after the current one is dead. I have stores the enemies and their properties in plist. I need to save the current state of the game. If any phone call comes the game should be started from the current state. So, I usedNsUserDefaults in this way, - (void) applicationDidFinishLaunching:(UIApplication*)application { ... NSUserDefaults *myDefaultOptions = [[myDefaultOptions stringForKey:@"enemyNumber"]intValue] ; //tempCount4 is the current Enemy number. It was declared in another class. I am using extern and using the value here. tempCount4 = [[myDefaultOptions stringForKey:@"enemyNumber"]intValue] ; } - (void)applicationWillTerminate:(UIApplication *)application { [[CCDirector sharedDirector] end]; [myDefaultOptions setObject:tempCount4 forKey:@"enemyNumber"]; } The control is not entering in to the (void)applicationWillTerminate:(UIApplication *)application when I pressed the Home button. And when I touched the game icon on the screen the game is running from first screen and in log (terminal )it is not showing any values. And what should I store to resume my game from stored state. Can you explain where I was wrong ? Thank You.

    Read the article

  • Looking for an appropriate design pattern

    - by user1066015
    I have a game that tracks user stats after every match, such as how far they travelled, how many times they attacked, how far they fell, etc, and my current implementations looks somewhat as follows (simplified version): Class Player{ int id; public Player(){ int id = Math.random()*100000; PlayerData.players.put(id,new PlayerData()); } public void jump(){ //Logic to make the user jump //... //call the playerManager PlayerManager.jump(this); } public void attack(Player target){ //logic to attack the player //... //call the player manager PlayerManager.attack(this,target); } } Class PlayerData{ public static HashMap<int, PlayerData> players = new HashMap<int,PlayerData>(); int id; int timesJumped; int timesAttacked; } public void incrementJumped(){ timesJumped++; } public void incrementAttacked(){ timesAttacked++; } } Class PlayerManager{ public static void jump(Player player){ players.get(player.getId()).incrementJumped(); } public void incrementAttacked(Player player, Player target){ players.get(player.getId()).incrementAttacked(); } } So I have a PlayerData class which holds all of the statistics, and brings it out of the player class because it isn't part of the player logic. Then I have PlayerManager, which would be on the server, and that controls the interactions between players (a lot of the logic that does that is excluded so I could keep this simple). I put the calls to the PlayerData class in the Manager class because sometimes you have to do certain checks between players, for instance if the attack actually hits, then you increment "attackHits". The main problem (in my opinion, correct me if I'm wrong) is that this is not very extensible. I will have to touch the PlayerData class if I want to keep track of a new stat, by adding methods and fields, and then I have to potentially add more methods to my PlayerManager, so it isn't very modulized. If there is an improvement to this that you would recommend, I would be very appreciative. Thanks.

    Read the article

  • Detecting TCP dropout over an unreliable network

    - by yx
    I am doing some experimentation over an unreliable radio network (home brewed) using very rudimentary java socket programming to transfer messages back and forth between the end nodes. The setup is as follows: Node A --- Relay Node --- Node B One problem I am constantly running into is that somehow the connection drops out and neither Node A or B knows that the link is dead, and yet continues to transmit data. The TCP connection does not time out either. I have added in a heartbeat message that causes a timeout after a while, but I still would like to know what is the underlying cause of why TCP does not time out. Here are the options I am enabling when setting up a socket: channel.socket().setKeepAlive(false); channel.socket().setTrafficClass(0x08); // for max throughput This behavior is strange since it is totally different than when I have a wired network. On a wired network, I can simulate a disconnected connection by pulling out the ethernet cord, however, once I plug the cord back in, the connection becomes restablished and messages begin to be passed through once more. On the radio network, the connection is never reestablished and once it silently dies, the messages never resume. Is there some other unknown java implentation or setting for a socket that I can use, also, why am I seeing this behavior in the first place? And yes, before anyone says anything, I know TCP is not the preffered choice over an unreliable network, but in this case I wanted to ensure no packet loss.

    Read the article

  • Thread-Safe lazy instantiating using MEF

    - by Xaqron
    // Member Variable private static readonly object _syncLock = new object(); // Now inside a static method foreach (var lazyObject in plugins) { if ((string)lazyObject.Metadata["key"] = "something") { lock (_syncLock) { // It seems the `IsValueCreated` is not up-to-date if (!lazyObject.IsValueCreated) lazyObject.value.DoSomething(); } return lazyObject.value; } } Here I need synchronized access per loop. There are many threads iterating this loop and based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more that one time. Although Lazy class is for doing so and despite of the used lock, under high threading I have more than one instance created (I track this with a Interlocked.Increment on a volatile static int and log it somewhere). The problem is I don't have access to definition of Lazy and MEF defines how the Lazy class create objects. I should notice the CompositionContainer has a thread-safe option in constructor which is already used. My questions: 1) Why the lock doesn't work ? 2) Should I use an array of locks instead of one lock for performance improvement ?

    Read the article

  • Please tell me what is wrong with my threading!!!

    - by kiddo
    I have a function where I will compress a bunch of files into a single compressed file..it is taking a long time(to compress),so I tried implementing threading in my application..Say if I have 20 files for compression,I separated that as 5*4=20,inorder to do that I have separate variables(which are used for compression) for all 4 threads in order to avoid locks and I will wait until the 4 thread finishes..Now..the threads are working but i see no improvement in their performance..normally it will take 1 min for 20 files(for example) after implementing threading ...there is only 5 or 3 sec difference., sometimes the same. here i will show the code for 1 thread(so it is for other3 threads) //main thread myClassObject->thread1 = AfxBeginThread((AFX_THREADPROC)MyThreadFunction1,myClassObject); .... HANDLE threadHandles[4]; threadHandles[0] = myClassObject->thread1->m_hThread; .... WaitForSingleObject(myClassObject->thread1->m_hThread,INFINITE); UINT MyThreadFunction(LPARAM lparam) { CMerger* myClassObject = (CMerger*)lparam; CString outputPath = myClassObject->compressedFilePath.GetAt(0);//contains the o/p path wchar_t* compressInputData[] = {myClassObject->thread1outPath, COMPRESS,(wchar_t*)(LPCTSTR)(outputPath)}; HINSTANCE loadmyDll; loadmydll = LoadLibrary(myClassObject->thread1outPath); fp_Decompress callCompressAction = NULL; int getCompressResult=0; myClassObject->MyCompressFunction(compressInputData,loadClient7zdll,callCompressAction,myClassObject->thread1outPath, getCompressResult,minIndex,myClassObject->firstThread,myClassObject); return 0; }

    Read the article

  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation? Before: SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id FROM table1 t1, table2 t2, table3 t3 WHERE t1.id = t2.id AND t1.version_id = t2.version_id AND t2.id = 123 AND t1.version_id = t3.version_id AND t1.VERSION_NAME <> 'AA' order by t1.id Plan Cost: 831 Elapsed Time: 00:00:21.40 Number of Records: 40,717 After: SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id FROM table2 t2, table3 t3, table1 t1 WHERE t1.id = t2.id + 0 AND t1.version_id = t2.version_id + 0 AND t2.id = 123 AND t1.version_id = t3.version_id + 0 AND t1.VERSION_NAME || '' <> 'AA' AND t3.version_id = t2.version_id + 0 order by t1.id Plan Cost: 686 Elapsed Time: 00:00:00.95 Number of Records: 40,717 Questions: Why does re-arranging the order of the tables in the FROM clause help? Why does adding + 0 to the WHERE clause comparisons help? Why does || '' <> 'AA' in the WHERE clause VERSION_NAME comparison help? Is this a more efficient way of handling possible nulls on this column?

    Read the article

  • (php) regex to remove comments but ignore occurrences within strings

    - by David
    Hi there, I am writing a comment-stripper and trying to accommodate for all needs here. I have the below stack of code which removes pretty much all comments, but it actually goes too far. A lot of time was spent trying and testing and researching the regex patterns to match, but I don't claim that they are the best at each. My problem is that I also have situation where I have 'PHP comments' (that aren't really comments' in standard code, or even in PHP strings, that I don't actually want to have removed. Example: <?php $Var = "Blah blah //this must not comment"; // this must comment. ?> What ends up happening is that it strips out religiously, which is fine, but it leaves certain problems: <?php $Var = "Blah blah ?> Also: will also cause problems, as the comment removes the rest of the line, including the ending ? See the problem? So this is what I need... Comment characters within '' or "" need to be ignored PHP Comments on the same line, that use double-slashes, should remove perhaps only the comment itself, or should remove the entire php codeblock. Here's the patterns I use at the moment, feel free to tell me if there's improvement I can make in my existing patterns? :) $CompressedData = $OriginalData; $CompressedData = preg_replace('!/\*.*?\*/!s', '', $CompressedData); // removes /* comments */ $CompressedData = preg_replace('!//.*?\n!', '', $CompressedData); // removes //comments $CompressedData = preg_replace('!#.*?\n!', '', $CompressedData); // removes # comments $CompressedData = preg_replace('/<!--(.*?)-->/', '', $CompressedData); // removes HTML comments Any help that you can give me would be greatly appreciated! :)
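
    For the PHP portions specifically, a token-based pass avoids the string problem entirely, since the tokenizer already knows what is a comment and what is inside quotes. A minimal sketch (it only handles PHP comments, so the HTML-comment regex would still run separately):

      <?php
      function strip_php_comments($source)
      {
          $output = '';
          foreach (token_get_all($source) as $token) {
              if (is_array($token)) {
                  list($id, $text) = $token;
                  if ($id === T_COMMENT || $id === T_DOC_COMMENT) {
                      continue;                    // drops // , # , /* */ and /** */ comments
                  }
                  $output .= $text;                // strings, code and whitespace pass through untouched
              } else {
                  $output .= $token;               // single-character tokens such as ; and {
              }
          }
          return $output;
      }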

    Read the article

  • php parsing csv with ftell

    - by Robert82
    I have a 500mb csv file with over 500,000 lines, each with 80 fields. I am using fget to process the file line by line. $col1 = array(); while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) { $col1[] = $row[0]; } Because of an execution time limit on the PHP file by my hosting provider (120 seconds), I can't process the whole file in one run. I tried using ftell() and fseek() to remember the last position for restart. The trouble is, sometimes the ftell() position is in the middle of a row, and resuming means missing the first half of the row. Is there an elegant way to know the last line successfully processed, and resume from the one after it? I realize I can do a simple counter, and then loop through to that point again, but that would produce diminishing returns on the rows I can process towards the end of the file. Is there something like ftell() and fseek() that would work in my case? Or a way to limit ftell() to return the pointer for the end of the previous line?
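
    One way to keep the saved position from landing mid-row is to take ftell() only after a row has been fully processed, so the stored offset always points at the start of the next unprocessed line. A sketch of that shape; the bookmark file name and the 100-second cutoff are made up:

      <?php
      $handle   = fopen('big.csv', 'r');
      $bookmark = 'import.pos';                       // placeholder path for the saved offset
      $col1     = array();

      $pos = file_exists($bookmark) ? (int)file_get_contents($bookmark) : 0;
      fseek($handle, $pos);                           // resume exactly at the start of a line

      $started = time();
      while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) {
          $col1[] = $row[0];                          // ... process the row ...

          $pos = ftell($handle);                      // now past the row just read, i.e. start of the next one
          if (time() - $started > 100) {              // stop well before the 120-second limit
              file_put_contents($bookmark, $pos);
              exit;
          }
      }
      file_put_contents($bookmark, $pos);             // finished the whole file
      fclose($handle);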

    Read the article

  • Powershell wait for file to delete, then copy a folder

    - by user3317623
    Morning guys, I have a couple of scripts that have to sync a folder from the network server, to the local terminal server, and lastly into the %LOCALAPPDATA%. I need to first check if a folder is being synced (this is done by creating a temporary COPYING.TXT on the server), and wait until that is removed, THEN copy to %LOCALAPPDATA%. Something like this: Server-side script executes, which syncs my folder to all of my terminal servers. It creates a COPYING.TXT temporary file, which indicates the sync is in progress. Once the sync is finished, the script removes the COPYING.TXT If someone logs on during the sync, I need a script to wait until the COPYING.TXT is deleted I.E the sync is finished, then resume the local sync into their %LOCALAPPDATA%. do{cp c:\folder\program $env:LOCALAPPDATA} while(!(test-path c:\folder\COPYING.txt)) (So that copies the folder while the file DOESN'T exist, but I don't think that exits cleanly) I cannot format the above as code for some reason I'm sorry? Or: while(!(test-path c:\folder\COPYING.txt)){ cp c:\folder\program $env:LOCALAPPDATA\ -recurse -force if (!(test-path c:\folder\program)){return} } But that script quits if the COPYING.TXT exists. I think I need to create a function and insert that function within itself, or a nested while loop, but that is starting to make my head hurt. Any help would be greatly appreciated. Thanks guys.
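
    A sketch of the wait-then-copy shape: poll until COPYING.TXT disappears (with a timeout so a stuck sync can't hang the logon forever), then do the copy once. The five-minute timeout is arbitrary.

      $marker  = 'C:\folder\COPYING.TXT'
      $source  = 'C:\folder\program'
      $timeout = (Get-Date).AddMinutes(5)        # arbitrary safety net

      # wait while the server-side sync is still running
      while ((Test-Path $marker) -and ((Get-Date) -lt $timeout)) {
          Start-Sleep -Seconds 5
      }

      # copy only if the sync finished (marker gone) and the source exists
      if (-not (Test-Path $marker) -and (Test-Path $source)) {
          Copy-Item $source $env:LOCALAPPDATA -Recurse -Force
      }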

    Read the article

  • Find existence of number in a sorted list in constant time? (Interview question)

    - by Rich
    I'm studying for upcoming interviews and have encountered this question several times (written verbatim) Find or determine non existence of a number in a sorted list of N numbers where the numbers range over M, M N and N large enough to span multiple disks. Algorithm to beat O(log n); bonus points for constant time algorithm. First of all, I'm not sure if this is a question with a real solution. My colleagues and I have mused over this problem for weeks and it seems ill formed (of course, just because we can't think of a solution doesn't mean there isn't one). A few questions I would have asked the interviewer are: Are there repeats in the sorted list? What's the relationship to the number of disks and N? One approach I considered was to binary search the min/max of each disk to determine the disk that should hold that number, if it exists, then binary search on the disk itself. Of course this is only an order of magnitude speedup if the number of disks is large and you also have a sorted list of disks. I think this would yield some sort of O(log log n) time. As for the M N hint, perhaps if you know how many numbers are on a disk and what the range is, you could use the pigeonhole principle to rule out some cases some of the time, but I can't figure out an order of magnitude improvement. Also, "bonus points for constant time algorithm" makes me a bit suspicious. Any thoughts, solutions, or relevant history of this problem?

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >