Search Results

Search found 14861 results on 595 pages for 'high speed computing'.

Page 372/595

  • Fighting system in PHP & MySQL

    - by Gully
    I am working on a game like Mafia Wars and I am trying to get the fighting system working, but I keep getting lost trying to work out who is going to win the fight. It also needs to handle the case where the stats are close, so that there is a random chance of either player winning.
      $strength = $my_strength;
      $otherplayerinfo = mysql_query("SELECT * FROM accounts WHERE id='$player_id'");
      $playerinfo = mysql_fetch_array($otherplayerinfo);
      $players_strength = $playerinfo['stre'];
      $players_speed = $playerinfo['speed'];
      $players_def = $playerinfo['def'];
      if($players_strength > $strength){
          $strength_point_player = 1;
          $strength_point_your = 0;
      }else{
          $strength_point_your = 1;
          $strength_point_player = 0;
      }
    I was trying a point system but I still could not get it to work.
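
    A hedged sketch of one way to finish the point system, reusing the names from the snippet above; $my_speed and $my_def are assumed to exist alongside $my_strength, and when two stats are within a small margin the point for that stat is awarded at random:
      // Award one point per stat; if the two values are close, flip a coin instead.
      function award_point($mine, $theirs, $margin = 5) {
          if (abs($mine - $theirs) <= $margin) {
              return mt_rand(0, 1);              // close fight: random winner for this stat
          }
          return ($mine > $theirs) ? 1 : 0;      // otherwise the higher stat wins
      }

      $my_points    = 0;
      $their_points = 0;

      foreach (array('stre'  => $my_strength,
                     'speed' => $my_speed,       // assumed to exist, like $my_strength
                     'def'   => $my_def) as $column => $my_stat) {
          $point          = award_point($my_stat, $playerinfo[$column]);
          $my_points     += $point;
          $their_points  += 1 - $point;
      }

      $i_win = ($my_points > $their_points);     // handle a tie however suits the game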

    Read the article

  • Inheritance - initialization problem

    - by dumbquestion
    I have a C++ class derived from a base class in a framework. The derived class doesn't have any data members because I need it to be freely convertible into the base class and back - the framework is responsible for loading and saving the objects and I can't change it. My derived class just has functions for accessing the data. But there are a couple of places where I need to store some temporary local values to speed up access to data in the base class.
      mydata* MyClass::getData() {
          if ( !m_mydata ) { // set to NULL in the constructor
              m_mydata = some_long_and_complex_operation_to_get_the_data_in_the_base();
          }
          return m_mydata;
      }
    The problem is that if I just access the object by casting the base class pointer returned from the framework to MyClass*, the constructor for MyClass is never called and m_mydata is junk. Is there a way of initializing the m_mydata pointer only once?
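
    One sketch of a workaround, assuming the cached value can live outside the object entirely: keep the lazily computed data in a map keyed by the object's address, so MyClass never needs a member that a constructor would have to initialize (the function names are taken from the snippet above).
      #include <map>

      mydata* MyClass::getData()
      {
          // The cache lives outside the object, so nothing has to be constructed
          // when a Base* returned by the framework is simply cast to MyClass*.
          static std::map<const MyClass*, mydata*> cache;

          mydata*& slot = cache[this];   // first access default-initializes the pointer to NULL
          if (!slot) {
              slot = some_long_and_complex_operation_to_get_the_data_in_the_base();
          }
          return slot;
      }
    Entries would need to be erased when the framework destroys or recycles objects (a reused address could otherwise hand back stale data), and a lock around the map is needed if the framework calls in from multiple threads.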

    Read the article

  • Is it good practice to use an iframe to implement a header/navbar?

    - by Xah Lee
    Is it good practice to use an iframe to implement a header/navbar? My website is basically 5 thousand pages, all static HTML (not using any content manager, PHP, etc.). I am in the process of adding a navbar at the top of each page, e.g. tabs, or crumbs, or any sort of header with a JS menu (the exact design is not decided yet). My question is: is it good practice to use an iframe for this? (So instead of having the same text repeated in all 5 thousand pages, each page will just have a short iframe pointing to a header file.) I am aware that one should reduce HTTP requests for speed, but this is OK with me. Are there any other problems I might have with this? SEO or any other technical issue?

    Read the article

  • Measure CPU performance via JS

    - by Nicholas Kyriakides
    A webapp has, as a central component, a relatively heavy algorithm that handles geometric operations. There are two options for making the whole thing accessible from both high-end machines and relatively slower mobile devices: I will use RPCs if I detect that the user's machine is "slow"; otherwise, if I detect that the machine can handle it OK, I provide the webapp with the script to handle it client side. Now, what would be a reliable way to detect the speed of the user's machine? I was thinking of providing a sample script as a test when the page loads and measuring the time it takes to execute. Any ideas?
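
    A sketch of the timing idea, with the workload and the 200 ms budget both being assumptions to tune against real devices; anything over budget falls back to the RPC path.
      // Run a small, fixed chunk of representative work and time it.
      function benchmarkClient() {
          var start = new Date().getTime();
          var x = 0;
          for (var i = 0; i < 2000000; i++) {       // fixed workload, vaguely geometric math
              x += Math.sqrt(i) * Math.sin(i);
          }
          return new Date().getTime() - start;      // elapsed milliseconds
      }

      var BUDGET_MS = 200;                          // assumed threshold
      var useClientSide = benchmarkClient() < BUDGET_MS;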

    Read the article

  • Using multiple databases within one application (ASP.NET MVC, LINQ to SQL)

    - by Alex
    I have a web application made for several groups of people not connected with each other. Instead of using one database for all of them, I'm thinking about making separate databases. This would improve the speed of the queries and free me from checking which group the user belongs to. But since I'm working with LINQ to SQL, my classes are explicitly connected with the databases, so I would have to make separate DataContexts for all of the databases. So how can I solve this problem? Or should I just not bother and use one database only?
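
    One sketch, assuming the schemas are identical across groups: keep a single generated DataContext class and pick the connection string per request, so no separate context type per database is needed (MyAppDataContext and the connection-string naming convention below are assumptions).
      using System.Configuration;   // ConfigurationManager

      public static class ContextFactory
      {
          public static MyAppDataContext ForGroup(string groupName)
          {
              // Hypothetical convention: one named connection string per group in web.config.
              string cs = ConfigurationManager
                              .ConnectionStrings["Group_" + groupName].ConnectionString;

              // The designer-generated DataContext has a constructor taking a connection string.
              return new MyAppDataContext(cs);
          }
      }

      // usage in a controller action:
      // using (var db = ContextFactory.ForGroup(currentUser.Group)) { ... }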

    Read the article

  • Aggregate Functions in Index with IBM DB2

    - by Erkan
    Is there any way to pre-aggregate the results of aggregate functions (e.g. count()) and store them in an index? The background is: I want to speed up count() queries, so that
      Select count(users) from TE123 where region = 'A';
    would be supported by an index like
      Region | Count(Users)
      A      | 548
      E      | 458
    I know that MQTs would also help with this problem. However, in this case it is not possible to use an MQT, as we use a kind of ORM and we don't want to define entities on MQTs. I vaguely remember - one DBA told me - that such a feature is planned for DB2 V10.

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that on the new db server mysqld.exe is consuming 95-100% of the CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup. Here is the my.ini file:
      # MySQL Server Instance Configuration File
      # Generated by the MySQL Server Instance Configuration Wizard
      #
      # Installation Instructions
      # On Linux you can copy this file to /etc/my.cnf to set global options, mysql-data-dir/my.cnf to set server-specific options (@localstatedir@ for this installation) or to ~/.my.cnf to set user-specific options.
      # On Windows you should keep this file in the installation directory of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To make sure the server reads the config file use the startup option "--defaults-file".
      # To run the server from the command line, execute this in a command line shell, e.g.
      #   mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
      # To install the server as a Windows service manually, execute this in a command line shell, e.g.
      #   mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
      # And then execute this in a command line shell to start the server, e.g.
      #   net start MySQLXY
      #
      # Guidelines for editing this file
      # In this file, you can use all long options that the program supports. If you want to know the options a program supports, start the program with the "--help" option. More detailed information about the individual options can also be found in the manual.

      # CLIENT SECTION
      # The following options will be read by MySQL client applications. Note that only client applications shipped by MySQL are guaranteed to read this section. If you want your own MySQL client program to honor these values, you need to specify it as an option during the MySQL client library initialization.
      [client]
      port=3306

      [mysql]
      default-character-set=latin1

      # SERVER SECTION
      # The following options will be read by the MySQL Server. Make sure that you have installed the server correctly (see above) so it reads this file.
      [mysqld]
      # The TCP/IP Port the MySQL Server will listen on
      port=3306
      # Path to installation directory. All paths are usually resolved relative to this.
      basedir="D:/MySQL/"
      # Path to the database root
      datadir="D:/MySQL/data"
      # The default character set that will be used when a new schema or table is created and no character set is defined
      default-character-set=latin1
      # The default storage engine that will be used when creating new tables
      default-storage-engine=MYISAM
      # Set the SQL mode to strict
      # sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
      # we changed this because there are a couple of queries that can get blocked otherwise
      sql-mode=""

      # performance configs
      skip-locking
      max_allowed_packet = 1M
      table_open_cache = 512
      # The maximum amount of concurrent sessions the MySQL server will allow. One of these connections will be reserved for a user with SUPER privileges to allow the administrator to login even if the connection limit has been reached.
      max_connections=1510
      # Query cache is used to cache SELECT results and later return them without actually executing the same query once again. Having the query cache enabled may result in significant speed improvements if you have a lot of identical queries and rarely changing tables. See the "Qcache_lowmem_prunes" status variable to check if the current value is high enough for your load. Note: In case your tables change very often or if your queries are textually different every time, the query cache may result in a slowdown instead of a performance improvement.
      query_cache_size=168M
      # The number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. Therefore you have to make sure to set the amount of open files allowed to at least 4096 in the variable "open-files-limit" in section [mysqld_safe]
      table_cache=3020
      # Maximum size for internal (in-memory) temporary tables. If a table grows larger than this value, it is automatically converted to a disk-based table. This limitation is for a single table. There can be many of them.
      tmp_table_size=30M
      # How many threads we should keep in a cache for reuse. When a client disconnects, the client's threads are put in the cache if there aren't more than thread_cache_size threads from before. This greatly reduces the amount of thread creation needed if you have a lot of new connections. (Normally this doesn't give a notable performance improvement if you have a good thread implementation.)
      thread_cache_size=64

      # *** MyISAM Specific options
      # The maximum size of the temporary file MySQL is allowed to use while recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE). If the file-size would be bigger than this, the index will be created through the key cache (which is slower).
      myisam_max_sort_file_size=100G
      # If the temporary file used for fast index creation would be bigger than using the key cache by the amount specified here, then prefer the key cache method. This is mainly used to force long character keys in large tables to use the slower key cache method to create the index.
      myisam_sort_buffer_size=64M
      # Size of the Key Buffer, used to cache index blocks for MyISAM tables. Do not set it larger than 30% of your available memory, as some memory is also required by the OS to cache rows. Even if you're not using MyISAM tables, you should still set it to 8-64M as it will also be used for internal temporary disk tables.
      key_buffer_size=3072M
      # Size of the buffer used for doing full table scans of MyISAM tables. Allocated per thread, if a full scan is needed.
      read_buffer_size=2M
      read_rnd_buffer_size=8M
      # This buffer is allocated when MySQL needs to rebuild the index in REPAIR, OPTIMIZE, ALTER TABLE statements as well as in LOAD DATA INFILE into an empty table. It is allocated per thread so be careful with large settings.
      sort_buffer_size=2M

      # *** INNODB Specific options ***
      innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"
      # Use this option if you have a MySQL server with InnoDB support enabled but you do not plan to use it. This will save memory and disk space and speed up some things.
      skip-innodb
      # Additional memory pool that is used by InnoDB to store metadata information. If InnoDB requires more memory for this purpose it will start to allocate it from the OS. As this is fast enough on most recent operating systems, you normally do not need to change this value. SHOW INNODB STATUS will display the current amount used.
      innodb_additional_mem_pool_size=11M
      # If set to 1, InnoDB will flush (fsync) the transaction logs to the disk at each commit, which offers full ACID behavior. If you are willing to compromise this safety, and you are running small transactions, you may set this to 0 or 2 to reduce disk I/O to the logs. Value 0 means that the log is only written to the log file and the log file flushed to disk approximately once per second. Value 2 means the log is written to the log file at each commit, but the log file is only flushed to disk approximately once per second.
      innodb_flush_log_at_trx_commit=1
      # The size of the buffer InnoDB uses for buffering log data. As soon as it is full, InnoDB will have to flush it to disk. As it is flushed once per second anyway, it does not make sense to have it very large (even with long transactions).
      innodb_log_buffer_size=6M
      # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and row data. The bigger you set this the less disk I/O is needed to access data in tables. On a dedicated database server you may set this parameter up to 80% of the machine physical memory size. Do not set it too large, though, because competition for the physical memory may cause paging in the operating system. Note that on 32bit systems you might be limited to 2-3.5G of user level memory per process, so do not set it too high.
      innodb_buffer_pool_size=500M
      # Size of each log file in a log group. You should set the combined size of log files to about 25%-100% of your buffer pool size to avoid unneeded buffer pool flush activity on log file overwrite. However, note that a larger logfile size will increase the time needed for the recovery process.
      innodb_log_file_size=100M
      # Number of threads allowed inside the InnoDB kernel. The optimal value depends highly on the application, hardware as well as the OS scheduler properties. A too high value may lead to thread thrashing.
      innodb_thread_concurrency=10

      # replication settings (this is the master)
      log-bin=log
      server-id = 1
    Thanks for all the help. It is greatly appreciated.
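
    Not a full tuning answer, but one hedged suggestion for finding where the CPU is going: enable the slow query log (the option spellings below are the old-style names used by MySQL 5.0/5.1-era servers) and see which statements mysqld is actually burning time on.
      # Add to the [mysqld] section, then restart; review the log after a busy period.
      log-slow-queries = "D:/MySQL/slow-queries.log"
      long_query_time  = 2
      log-queries-not-using-indexes
    With 100 queries per second and MyISAM's table-level locks, even a handful of unindexed statements can keep a CPU pegged, so this usually narrows the problem down faster than config changes alone.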

    Read the article

  • I need to pad IP addresses with Zeroes for each octet

    - by Felipe Alvarez
    Starting with a string of an unspecified length, I need to make it exactly 43 characters long (front-padded with zeroes). It is going to contain IP addresses and port numbers. Something like:
      ### BEFORE
      # Unfortunately includes ':' colon
      66.35.205.123.80-137.30.123.78.52172:

      ### AFTER
      # Colon removed.
      # Digits padded to three (3) and five (5)
      # characters (for IP address and port numbers, respectively)
      066.035.05.123.00080-137.030.123.078.52172
    This is similar to the output produced by tcpflow. I am programming in Bash. I can provide a copy of the script if required. If it's at all possible, it would be nice to use a bash built-in, for speed. Is printf suitable for this type of thing?
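
    A minimal sketch using only bash built-ins, assuming the input always has the BEFORE form above (unpadded decimal fields with a trailing colon):
      #!/bin/bash
      pad_flow() {
          local s=${1%:}        # strip the trailing colon
          local IFS='.-'        # split on dots and the dash
          read -r a1 a2 a3 a4 p1 b1 b2 b3 b4 p2 <<< "$s"
          printf '%03d.%03d.%03d.%03d.%05d-%03d.%03d.%03d.%03d.%05d\n' \
                 "$a1" "$a2" "$a3" "$a4" "$p1" "$b1" "$b2" "$b3" "$b4" "$p2"
      }

      pad_flow "66.35.205.123.80-137.30.123.78.52172:"
      # -> 066.035.205.123.00080-137.030.123.078.52172
    printf's %0Nd does the padding and is indeed a bash built-in. Note that fields which already carry leading zeroes would be read as octal by %d, so this assumes the raw, unpadded form of the input.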

    Read the article

  • Optional URL fragment in Codeigniter?

    - by DA
    This is maybe a simple syntax question or possibly a best-practices question in terms of CodeIgniter. I should start by saying I'm not a PHP or CodeIgniter person, so I'm trying to get up to speed to help on a project. I've found the CI documentation fairly good. The one question I can't find an answer to is how to make part of a URL optional. An example the CI documentation uses is this:
      example.com/index.php/products/shoes/sandals/123
    and then the function used to parse the URI:
      function shoes($sandals, $id)
    For my example, I'd like to be able to modify the URL as such:
      example.com/index.php/products/shoes/all
    So, if no ID is passed, it's just ignored. Can that be done? Should that be done? A second question, unrelated to my problem but pertaining to the example above: why would the variable be named $sandals when, in the example, its value is 'sandals'? Shouldn't that variable be something like $shoetype?
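
    A sketch of how this is usually handled in CodeIgniter, assuming a Products controller (CI 1.x style): trailing URI segments map to method parameters, so giving them default values makes them optional, and /products/shoes/all and /products/shoes/sandals/123 both land in the same method.
      class Products extends Controller {     // CI 1.x base class; CI 2+ uses CI_Controller

          // example.com/index.php/products/shoes/sandals/123
          // example.com/index.php/products/shoes/all
          function shoes($shoetype = 'all', $id = null)
          {
              if ($id === null) {
                  // no ID in the URL: list every shoe of this $shoetype
              } else {
                  // show the single shoe identified by $id
              }
          }
      }
    This also answers the second question: the parameter names are entirely up to you. $sandals only held the value 'sandals' because that happened to be the third URI segment, and $shoetype would indeed be a clearer name.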

    Read the article

  • How to merge branches in Git by "hunk"

    - by user1316464
    Here's the scenario. I made a "dev" branch off the "master" branch and made a few new commits. Some of those changes are going to only be relevant to my local development machine. For example I changed a URL variable to point to a local apache server instead of the real URL that's posted online (I did this for speed during the testing phase). Now I'd like to incorporate my changes from the dev branch into the master branch but NOT those changes which only make sense in my local environment. I'd envisioned something like a merge --patch which would allow me to choose the changes I want to merge line by line. Alternatively maybe I could checkout the "master" branch, but keep the files in my working directory as they were in the "dev" branch, then do a git add --patch. Would that work?
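
    One sketch of the second idea with plain git, assuming you are currently on master and the local-only tweaks live in files that also contain changes you do want (the paths below are placeholders): git checkout --patch can pull changes over from dev hunk by hunk.
      git checkout master

      # Offer each differing hunk from dev interactively (y = take it, n = skip,
      # s = split the hunk further); only the accepted hunks touch the working tree.
      git checkout -p dev -- src/config.js src/app.js

      git commit -m "Bring over dev work, minus local-only settings"
    Note that this records an ordinary commit rather than a merge commit. A longer-term fix is to keep machine-specific values (like that local URL) out of tracked files entirely, e.g. in an untracked local config the code falls back to, so a plain git merge dev works.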

    Read the article

  • getting the value of a filter at an arbitrary time

    - by Andiih
    Context: I'm trying to improve the values returned by the iPhone CLLocationManager, although this is a more generally applicable problem. The key is that CLLocationManager returns data on the current velocity as and when it feels like it, rather than at a fixed sample rate. I'd like to use a feedback equation to improve accuracy:
      v = (k * v) + (1 - k) * currentVelocity
    where currentVelocity is the speed returned by didUpdateToLocation:fromLocation: and v is the output velocity (also used for the feedback element). Because of the "as and when" nature of didUpdateToLocation:fromLocation: I could calculate the time interval since it was last called, and do something like
      for (i = 0; i < timeintervalsincelastcalled; i++)
          v = (k * v) + (1 - k) * currentVelocity;
    which would work, but is wasteful of cycles, especially as I probably want timeintervalsincelastcalled to be measured in tenths of a second. Is there a way to solve this without the loop? i.e. rework (integrate?) the formula so I can put an interval into the equation and get the same answer as I would have by iteration?
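
    There is a closed form for applying that filter n times in a row, under the assumption that currentVelocity is held constant over the whole gap; a sketch in C (each step shrinks the gap between v and currentVelocity by a factor of k, so n steps shrink it by k^n):
      #include <math.h>

      /* Iterating  v = k*v + (1-k)*c  n times with c fixed collapses to:
       *     v_n = c + pow(k, n) * (v_0 - c)
       * n does not need to be an integer, so the elapsed time (in tenths of a
       * second, or whatever unit one step represents) can be passed directly. */
      double filtered_velocity(double v0, double c, double k, double n)
      {
          return c + pow(k, n) * (v0 - c);
      }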

    Read the article

  • How to conditionalize GUI tests using Netbeans/Maven vs maven on command line invocation

    - by Ilane
    I'd like to have a single project pom, but have my GUI tests always run when I'm invoking JUnit in NetBeans, and have them conditional (on an environment variable?) when building on the command line (usually for a production build on a headless machine, but sometimes just for build speed). I don't mind instrumenting my JUnit tests for this, as I already have to set up my GUI test infrastructure, but how do I conditionalize my pom? NetBeans 6.5 with the Maven plugin. Any ideas how I can accomplish this? Ilane
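
    A sketch of one common arrangement, assuming the GUI tests can be recognised by a naming convention (here *GuiTest.java, which is an assumption): a profile that excludes them from Surefire, selected explicitly on headless/production builds and simply not selected in the IDE.
      <!-- in pom.xml -->
      <profiles>
        <profile>
          <id>skip-gui-tests</id>
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                  <excludes>
                    <!-- GUI test naming convention: an assumption -->
                    <exclude>**/*GuiTest.java</exclude>
                  </excludes>
                </configuration>
              </plugin>
            </plugins>
          </build>
        </profile>
      </profiles>
    On the headless box you would then build with mvn install -Pskip-gui-tests (or list the profile in that machine's settings.xml under activeProfiles); NetBeans, which just runs the default goals, keeps running everything. Tying activation to an environment variable is also possible via an activation block, but the explicit -P flag is the least version-sensitive option.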

    Read the article

  • jquery mouseleave issue when moving too slow

    - by David
    Hello. I am using the jQuery mouseenter and mouseleave events to slide a div down and up. Everything works well except for the mouseleave, which doesn't appear to fire ONLY if the mouse is moved off of the div quite slowly. If I move the mouse at a relatively normal or fast speed then it works as expected. Can anyone explain this or provide any info on how to get around this? Code:
      $(document).ready(function() {
          $('header').mouseenter(function() {
              $(this).stop().animate({'top' : '25px'}, 500, function() {
                  $(this).delay(600).animate({'top' : '-50px'}, 500);
              });
          }).
          mouseleave(function(e) {
              var position = $(this).position();
              if (e.pageY > position.top + $(this).height()) {
                  $(this).stop().delay(600).animate({'top' : '-75px'}, 500);
              }
          });
      });

    Read the article

  • Is there an alternative to FTP?

    - by Danny
    I am trying to find an alternative to FTP. It's a single file transfer of up to 4 GB. Any suggestions? Maybe HTTP? Or should I stick it out with FTP? More info - we have an app that we distribute to tens of thousands of clients that upload single large files. FTP has proven to be error-prone with a single file of that size. Speed is always a consideration. 'Resume' is a must. Cost shouldn't be an issue - I guess it depends.

    Read the article

  • database row/ record pointers

    - by David
    Hi. I don't know the correct words for what I'm trying to find out about, and as such I'm having a hard time googling. I want to know whether it's possible with databases (technology-independent, but I would be interested to hear whether it's possible with Oracle, MySQL and Postgres) to point to specific rows instead of executing my query again. So I might initially execute a query, find some rows of interest, and then wish to avoid searching for them again by keeping a list of pointers or some other metadata which indicates the location in the database I can go to straight away the next time I want those results. I realise there is caching on databases, but I want to keep these "pointers" elsewhere, and as such caching doesn't ultimately solve this problem. Is this just an index, and should I store the index and look up by this? Most of my current tables don't have indexes and I don't want the speed decrease that sometimes comes with indexes. So what's the magic term I've been trying to put into Google? Cheers
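
    For what it's worth, the usual term is a row identifier. Oracle exposes one directly as ROWID; MySQL and Postgres have no stable equivalent, so the portable version of this is simply keeping the primary keys of the rows you found. A sketch of the Oracle flavour (the table, column and ROWID value are made up):
      -- First pass: remember where the interesting rows live.
      SELECT ROWID AS row_ptr, order_id
        FROM orders
       WHERE status = 'INTERESTING';

      -- Later: jump straight back to one of them, no search involved.
      SELECT *
        FROM orders
       WHERE ROWID = 'AAAR3sAAEAAAACXAAA';   -- value captured from the first query
    ROWIDs can change if rows are physically moved (table reorganisation, export/import), and Postgres's ctid breaks on UPDATE or VACUUM FULL, which is why "store the primary key and index it" is the usual advice despite the concern about index overhead.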

    Read the article

  • Huge framerate difference between Test and Publish movie in Flash?

    - by Glacius
    Simply put, I am making a Flash MIDI player. I am using ENTER_FRAME for my timings. I set the framerate to 100 to ensure that the timing of each note in milliseconds is accurate. When I test the movie with CTRL + ENTER it works fine. When I publish it and open it in a browser (tested both IE and Chrome), it suddenly plays back a lot slower. I don't think it's a performance issue, since the code is very simple. If this slowdown is consistent then I can perhaps work with it so that the playback speed will be correct. Do browsers make the framerate slower, or do they implement a framerate cap of some sort? What is going on?

    Read the article

  • Java Or C++ Or What???

    - by Kronass
    Hi, my friends and I are starting a new project and we are shifting from Windows to Linux (for some reasons), and all of us are from a .NET background. For the new platform I decided to go with Java, since many parts are similar to .NET, but my friend is insisting on C++, saying it is much faster and very mature, and that working with it will not affect productivity and development speed. The project we will work on will have threading, extensive string and datetime manipulation, some socket programming, and of course work with an RDBMS (MySQL or Postgres, not decided yet). I have some fears with Java since Oracle acquired Sun, and these people will do anything to make money out of it. Some have advised Python and Ruby, and I like Python, but I don't know if I should make it the default language in this project. The project is not a web application; we will make services and executables. What do you think? If you have another opinion you are very welcome to share it. Hint: Mono is not an option.

    Read the article

  • Android: Touch seriously slowing my application

    - by Jason Rogers
    Hi all, I've been racking my brains on this one for a while. When I'm running my application (an OpenGL game) everything goes fine, but when I touch the screen my application slows down quite seriously (not noticeable on powerful phones like the Nexus One, but on the HTC Magic it gets quite annoying). I did a trace and found out that the touch events seem to be handled in a different thread, and even if that doesn't take much processing time, I think Android's ability to switch between threads is not so good... What is the best way to handle touch when speed is an issue? Currently I'm using, in the GLSurfaceView:
      @Override
      public boolean onTouchEvent(MotionEvent event) {
          GameHandler.onTouchEvent(event);
          return true;
      }
    Any ideas are welcome
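
    One workaround that is commonly suggested for exactly this on older handsets (a sketch, not a guaranteed fix): sleep briefly in onTouchEvent so the UI thread is not flooded with ACTION_MOVE events while the GL thread is trying to render.
      @Override
      public boolean onTouchEvent(MotionEvent event) {
          GameHandler.onTouchEvent(event);
          try {
              // Throttle move events to roughly one per frame; without this some
              // devices deliver them as fast as they can and starve the GL thread.
              Thread.sleep(16);
          } catch (InterruptedException ignored) {
          }
          return true;
      }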

    Read the article

  • How can you get the first digit in an int (C#)?

    - by Dinah
    In C#, what's the best way to get the 1st digit in an int? The method I came up with is to turn the int into a string, find the 1st char of the string, then turn it back to an int. int start = Convert.ToInt32(curr.ToString().Substring(0, 1)); While this does the job, it feels like there is probably a good, simple, math-based solution to such a problem. String manipulation feels clunky. Edit: irrespective of speed differences, mystring[0] instead of Substring() is still just string manipulation
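
    A math-only sketch that avoids strings entirely: keep dividing by 10 until one digit is left (Math.Abs is there so negative input gives the first significant digit, which is an assumption about the desired behaviour).
      static int FirstDigit(int n)
      {
          n = Math.Abs(n);          // -8457 and 8457 both yield 8
          while (n >= 10)
          {
              n /= 10;              // drop the last digit each pass
          }
          return n;
      }

      // FirstDigit(8457) == 8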

    Read the article

  • elastic / snaking line algorithm

    - by vhdirk
    Hi everyone. I am making a graphics application in which I can edit a polyline by dragging its control points. However, I'd like to make it a bit easier to use by making it elastic: when dragging a control point, instead of moving a single point, I'd like the points within a certain distance of that point to be moved as well, depending on how hard the control point is 'pulled'. Does anyone know a simple algorithm for this? It may be quite rudimentary, as the primary requirement is speed. Actually, knowing what to call such behaviour would also be nice, so I can look it up on Google. I tried 'snaking line', but that seems to refer to active contours, which isn't what I'm looking for. Thanks
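
    A minimal sketch of one common approach (often called a falloff or soft-selection weight in mesh editors), assuming the control points are plain x/y structs and that distance can be measured by index along the polyline, which is the cheapest option: each neighbour follows the dragged point by the drag vector scaled by a weight that fades smoothly to zero.
      #include <algorithm>
      #include <vector>

      struct Point { float x, y; };

      // Drag control point 'index' by (dx, dy); points within 'radius' indices
      // follow with a smoothstep falloff (weight 1 at the grabbed point, ~0 at the edge).
      void elasticDrag(std::vector<Point>& pts, int index, float dx, float dy, int radius)
      {
          int lo = std::max(0, index - radius);
          int hi = std::min((int)pts.size() - 1, index + radius);
          for (int i = lo; i <= hi; ++i)
          {
              int   d = (i > index) ? (i - index) : (index - i);
              float t = 1.0f - d / float(radius + 1);
              float w = t * t * (3.0f - 2.0f * t);   // smoothstep falloff
              pts[i].x += dx * w;
              pts[i].y += dy * w;
          }
      }
    Using true arc-length distance instead of index distance gives nicer results on unevenly spaced points, at a small extra cost.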

    Read the article

  • NetBeans has really bad FTP support, forces me to download all files even before I can continue

    - by Vivek
    Hello friends, I have started using NetBeans recently after using Aptana, phpDesigner and Notepad++. I love NetBeans for its speed, and it has almost everything I want except for the fact that the FTP support is really bad. To start working on an FTP server, I have to download all the files to my localhost first, which is such a waste of bandwidth, and if there are many files, i.e. 1000+, then it's really annoying. I have tried mounting the remote FTP as a local filesystem in Windows and then using NetBeans to access it, but that doesn't work out either. If anybody who uses NetBeans a lot for PHP development can guide me on this, I would be highly obliged; this trivial problem is keeping me from using this awesome IDE. Thanks & Regards.

    Read the article

  • Is html/javascript equivalent to as3/flex?

    - by DJ.
    Hello my fellow coders, as I have noticed for a while now (like everybody else in the industry), the RIA market is shifting from AS3/Flex to HTML/JavaScript. What I would like to know is: is HTML/JavaScript as powerful as AS3/Flex, or are they entirely different? In other words, can I build the exact same applications with HTML (4/5) and JavaScript as I can with AS3/Flex? I'm not looking for a speed comparison, or for bashing one technology over the other. I just want to know if it is good for me to dive into JavaScript, jQuery...... PS: If there is another post on Stack Overflow with the exact same question, please share the link. Thank you.

    Read the article

  • Processing large recordsets in Rails

    - by japancheese
    Hello, I'm trying to perform a daily operation on a larger than normal dataset (2m+ records). However, Rails seems to take a very long time performing operations on such a dataset. Operations like Dataset.all.each do |data| ... end take a very long time to complete (I assume this is because it can't fit all the items into memory at once, right?). Does anyone have any strategies on how I could handle this situation? I know SQL would probably speed up the process, but I'm looking to use the Rails environment as I can do many more complicated things to the data than I can with just SQL statements.
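
    A sketch using ActiveRecord's built-in batching (find_each / find_in_batches, available since Rails 2.3), which pulls rows in fixed-size batches so memory stays flat while the per-record Ruby logic stays exactly as it is:
      # Loads 1,000 records at a time instead of instantiating all 2m+ at once.
      Dataset.find_each(:batch_size => 1000) do |data|
        # ... same per-record work as before ...
      end
    find_each orders by primary key under the hood, so custom ordering is ignored; for purely aggregate steps, pushing that part down to SQL (e.g. update_all or a grouped count) is still worth considering.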

    Read the article

  • MySQL thousands of updates, slowing down.

    - by noryb009
    I need to run a PHP loop a total of 100,000 times (about 10,000 each script run), and each iteration performs about 5 MySQL UPDATEs. When I run the loop 50 times, it takes 3 sec. When I run the loop 1000 times, it takes about 1300 sec. As you can see, MySQL is slowing down a lot with more UPDATEs. This is an example update:
      mysql_query("UPDATE table SET `row1`=`row1` +1 WHERE `UniqueValue`='5'");
    This is generated randomly from PHP, but I can store it in a variable and run it every n loops. Is there any way to either make MySQL and PHP run at a consistent speed (is PHP storing hidden variables?), or split up the script so they do? Note: I am running this for development purposes, not for production, so there will only be 1 computer accessing the data.
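
    A hedged first thing to check, assuming UniqueValue is not already indexed: without an index every UPDATE has to scan the whole table to find its row, so each statement gets slower as the table grows, which matches the non-linear slowdown described above.
      -- Lets MySQL jump straight to the row instead of scanning the table.
      ALTER TABLE `table` ADD INDEX idx_uniquevalue (`UniqueValue`);
    If the slowdown persists with the index in place, combining several increments to the same row into one statement (e.g. `row1` = `row1` + 3) and checking whether each mysql_query() call is opening a fresh connection are the next things worth looking at.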

    Read the article

  • HTML Line Spacing & Compact Code

    - by William Hand
    I was just wondering if there was a professional opinion on the matter of compact code. Does it really speed up page loading? Example:
      <body>

          <div id="a"></div>

          <div id="b"></div>

      </body>
    VS
      <body><div id="a"></div><div id="b"></div></body>
    Any ideas?

    Read the article
