Search Results

Search found 20838 results on 834 pages for 'mysql num rows'.


  • cPanel web server redundancy advice?

    - by crgnz
    At present I operate a (reasonably low volume) web-hosting service on a CentOS 5.3 server running cPanel/WHM. I would like to implement a level of redundancy such that in the event of server failure, I can restore service with a minimum of effort in less than 60 minutes. I also want to set up a secondary DNS that cPanel will replicate with. My current idea is to kill two birds with one stone:

    1. My current server is called "www1".
    2. Purchase an identical server (HP DL360 G4) with mirrored disks. Call this server "www2".
    3. Install CentOS 5.4 (or perhaps I should install 5.3 to be identical with www1).
    4. Install cPanel/WHM on this server and fully license it.
    5. Set up www1 and www2 cPanel to replicate DNS with each other.
    6. Set up a nightly replication script that does the following:
       a) rsyncs the /home directory from www1 to www2
       b) dumps all MySQL databases on www1 and copies them to a temp folder (with root access only) on www2
       c) triggers a script to run on www2 that restores the MySQL dumps

    Thus each night a fully working copy of all the websites and MySQL databases is copied to www2. I do not have enough knowledge of MySQL replication to understand whether it works safely and transparently with cPanel, so I propose the mysqldump/copy/restore approach due to not knowing any better! In the event that www1 dies a horrible death, I envisage that I could log in to www2, change the IP addresses to those that www1 had, and presto, the websites are available again.

    The advantage of this idea is that it is fairly simple and "low tech" and thus does not require an expert sysadmin to set up and monitor (I am NOT an expert sysadmin). The disadvantage is that up to a full day's worth of data changes would be lost. I think this would be acceptable to the sorts of customers I host at the moment. The other disadvantage would be having to pay for a second full cPanel license, but I am comfortable with that cost, so for now all I want to discuss are technical considerations. Is this a sound scheme?
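
    A minimal sketch of the nightly sync in step 6, written in Python, assuming key-based SSH from www1 to www2 and MySQL credentials supplied via a defaults file; hostnames, paths, and options are illustrative only, not a tested recipe:

      # nightly_sync.py - run on www1 from cron
      import subprocess

      # a) mirror the /home directory to www2
      subprocess.run(["rsync", "-az", "--delete", "/home/", "www2:/home/"], check=True)

      # b) dump all MySQL databases and copy the dump to a root-only folder on www2
      with open("/root/all-dbs.sql", "w") as dump:
          subprocess.run(["mysqldump", "--all-databases"], stdout=dump, check=True)
      subprocess.run(["rsync", "-az", "/root/all-dbs.sql", "www2:/root/dumps/"], check=True)

      # c) trigger the restore on www2
      subprocess.run(["ssh", "www2", "mysql < /root/dumps/all-dbs.sql"], check=True)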


  • After writing SQL statements in MySQL, how do I measure their speed/performance?

    - by Jian Lin
    I saw something like this in an "execution plan" article: 10 rows fetched in 0.0003s (0.7344s). How come there are two durations shown? And what if I don't have a large data set yet? If I have only 20, 50, or even just 100 records, can I really measure how two different SQL statements compare in terms of speed in a real-life situation? In other words, does there need to be at least hundreds of thousands of records, or even a million, to accurately compare the performance of two different SQL statements?
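
    One hedged way to compare two statements without a huge data set is to repeat each query many times and average the wall-clock time, which smooths out caching noise on small tables; a sketch using mysql-connector-python (connection details and the table are placeholders):

      import time
      import mysql.connector

      conn = mysql.connector.connect(user="user", password="pw", database="test")

      def avg_query_time(sql, runs=1000):
          cur = conn.cursor()
          start = time.perf_counter()
          for _ in range(runs):
              cur.execute(sql)
              cur.fetchall()   # include fetch time, like the "rows fetched" figure
          return (time.perf_counter() - start) / runs

      print(avg_query_time("SELECT * FROM Persons WHERE firstName = 'Peter'"))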


  • Multiple rows with a single INSERT in SQL Server 2008

    - by Todd
    I am testing the speed of inserting multiple rows with a single INSERT statement, for example:

      INSERT INTO [MyTable] VALUES (5, 'dog'), (6, 'cat'), (3, 'fish')

    This is very fast until I pass 50 rows in a single statement, then the speed drops significantly: inserting 10000 rows in batches of 50 takes 0.9 seconds, while inserting 10000 rows in batches of 51 takes 5.7 seconds. My question has two parts:

    1. Why is there such a hard performance drop at 50?
    2. Can I rely on this behavior and code my application to never send batches larger than 50?

    My tests were done in C++ and ADO.
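
    If you do decide to cap batches at the empirically fast size, the chunking is straightforward; a sketch in Python (the table and values mirror the question, and 50 is simply the measured threshold, not a documented limit):

      def batched_inserts(rows, batch_size=50):
          """Yield multi-row INSERT statements with at most batch_size rows each."""
          for i in range(0, len(rows), batch_size):
              chunk = rows[i:i + batch_size]
              values = ", ".join("(%d, '%s')" % (n, s) for n, s in chunk)
              yield "INSERT INTO [MyTable] VALUES %s" % values

      for stmt in batched_inserts([(5, "dog"), (6, "cat"), (3, "fish")]):
          print(stmt)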


  • What's the best way to store a MySQL database in source control?

    - by Marplesoft
    I am working on an application with a few other people and we'd like to store our MySQL database in source control. My thought is to have two files: one would be the create script for the tables, etc., and the other would be the inserts for our sample data. Is this a good approach? Also, what's the best way to export this information? And any suggestions for workflow in terms of ways to speed up the process of making changes, exporting, updating, etc.?
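
    One hedged way to produce those two files is mysqldump, which can split schema from data; a sketch (database name and credentials are placeholders):

      import subprocess

      def dump(extra_args, outfile):
          with open(outfile, "w") as f:
              subprocess.run(["mysqldump", "-u", "someuser", "-psecret", "mydb"]
                             + extra_args, stdout=f, check=True)

      dump(["--no-data"], "schema.sql")              # CREATE TABLE statements only
      dump(["--no-create-info"], "sample_data.sql")  # INSERT statements only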


  • System requirements of a write-heavy application serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.

    I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time, making at least 1 request every 10 seconds or so for a span of 30 minutes a day, 3 times a week. So it's more or less 100-150 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests. Crosses fingers to avoid 100 concurrent requests).

    I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. Local transactions basically download userdata in their locality and cache it for 1-2 weeks; attendance requests will probably be two numeric strings only: userid and eventid. Foreign transactions are for attendance of those who do not belong in the current locality, and will pass the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax, so no HTML data is included, only JSON, and both expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there are only a few bytes of difference anyway in the sizes of these transactions. At the moment userid may reach 6 digits and eventid is a 4- to 5-digit integer.

    I expect my users table to have at least 400k rows (adding around 500-1k rows a week), the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to grow by 400k rows (based on the number of users in the users table) a day, 3 days a week (1.2M rows a week). For me, this sounds big. But is it really that big? Or can it be handled by a single server (not sure about the server specs yet, since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. If this can't be handled by a single server, and I am to choose a MySQL replication topology, what would be the best setup for this case?

    Sorry if I sound vague or the question is too wide. I just don't know what to ask or what you want to know at this point.
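
    For what it's worth, the raw load math can be sanity-checked in a few lines of Python (all inputs are the question's own estimates):

      users = 1500
      request_interval_s = 10
      steady_rps = users / request_interval_s       # ~150 requests/second sustained
      attendance_rows_per_week = 400_000 * 3        # ~1.2M new attendance rows a week
      print(steady_rps, attendance_rows_per_week)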


  • Reporting Services is displaying extra rows for minimised row groups

    - by Graphain
    Hi, I have a fairly basic SQL Server Reporting Services report that is using nested row groups. Each sub-group depends on expanding its parent to be visible, which is all pretty standard. The layout is something like this:

      Company
        Car   SUM(Price)
          Part   Price

    My desired result when expanded is something like this (which I get fine):

      - SuperCarCompany
        - SuperCar       20
            Door         20
        - SuperCar2      70
            Door         30
            Window       40
      - OtherCarCompany
        - SuperCar2      50    /* Same SuperCar2 */
            Door         50
      - MoreCarCompany
        - BestCarEver    535
            Engine       500
            Door         30
            Window       5

    And when opened initially, something like this:

      + SuperCarCompany
      + OtherCarCompany
      + MoreCarCompany

    However, I'm getting this:

      + SuperCarCompany
      + SuperCar2      70   (i.e. sum of all SuperCar2)
      + OtherCarCompany
      + SuperCar       20
      + MoreCarCompany
      + BestCarEver    535

    and I can even expand these superfluous rows, like this:

      + SuperCarCompany
      - SuperCar2      70   (i.e. sum of all SuperCar2)
          Door         30   (i.e. first child of any SuperCar2)

    The superfluous rows disappear immediately when I expand the expected row above them (i.e. I'd need to expand all expected rows to get rid of all superfluous rows). Any idea on the cause?


  • Removing rows in asp.net

    - by user279521
    I am attempting to remove rows on my asp.net page using the following code:

      try
      {
          Table t = (Table)Page.FindControl("Panel1").FindControl("tbl");
          foreach (TableRow tr in t.Rows)
          {
              t.Rows.Remove(tr);
          }
      }
      catch (Exception e)
      {
          lblErrorMessage.Text = "Error - RemoveDynControls - " + e.Message;
      }

    However, I am getting an error (when the code loops the second time around): "Collection was modified; enumeration operation may not execute." Any ideas regarding what is causing the error message?
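
    The error occurs because the foreach enumerator is invalidated as soon as the collection it is walking is modified; in C# the usual fixes are a reverse index loop (for (int i = t.Rows.Count - 1; i >= 0; i--) t.Rows.RemoveAt(i);) or draining with while (t.Rows.Count > 0). The same pitfall and fix, sketched in Python for illustration:

      rows = ["r1", "r2", "r3"]

      # BUG: mutating the list while iterating over it skips elements
      # (Python silently misbehaves where C# throws an exception).
      # for r in rows:
      #     rows.remove(r)

      # Safe: iterate over a snapshot of the collection instead.
      for r in list(rows):
          rows.remove(r)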


  • Is there a special character in MySQL that would always return true in WHERE clauses?

    - by rm.
    Is there a character, say $, such that

      SELECT * FROM Persons WHERE firstName='Peter' AND areaCode=$;

    would return the same as

      SELECT * FROM Persons WHERE firstName='Peter'

    i.e. areaCode=$ would always evaluate to true and thus effectively "turn off" the areaCode=... criterion? I'm writing VBA code in Excel that fetches some rows based on a number of criteria. The criteria can either be enabled or disabled. A character like $ would make the disabling so much easier.
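
    As far as standard MySQL goes there is no such magic value, but 1=1 is the classic always-true predicate, and client code can simply build the WHERE clause conditionally so a disabled criterion never appears at all; the idea, sketched in Python (column names mirror the question):

      def build_query(first_name=None, area_code=None):
          clauses, params = [], []
          if first_name is not None:
              clauses.append("firstName = %s")
              params.append(first_name)
          if area_code is not None:
              clauses.append("areaCode = %s")
              params.append(area_code)
          where = " AND ".join(clauses) or "1=1"   # no criteria -> match everything
          return "SELECT * FROM Persons WHERE " + where, params

      print(build_query(first_name="Peter"))       # areaCode criterion disabled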


  • PHP next MySQL row - how to advance the pointer until a function returns true?

    - by ropbhardgood
    I have a PHP script that takes a value from a row in my MySQL database and runs it through a function. If the function determines the value is true, it returns one value; if it's false, the script needs to go to the next value in the database and check that one, until eventually one returns true. I think I need to use mysql_fetch_assoc, but I'm not really sure in what way to use it... I wish I could post my code to be more specific, but it's a lot of code and most of it has no bearing on this issue...
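
    A sketch of the loop shape described above, using a generic Python DB-API cursor (in PHP the analogue is looping on mysql_fetch_assoc($result) and breaking when the check passes); check_value and the table/column names are stand-ins:

      def first_match(cursor, check_value):
          """Return the first value in my_table that passes check_value, else None."""
          cursor.execute("SELECT value FROM my_table")
          row = cursor.fetchone()
          while row is not None:
              if check_value(row[0]):
                  return row[0]
              row = cursor.fetchone()
          return None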


  • What's the difference between these 2 MySQL queries? One uses a LEFT JOIN

    - by Lyon
    Hi, I see people using LEFT JOIN in their MySQL queries to fetch data from two tables, but I normally do it without LEFT JOIN. Are there any differences besides the syntax, e.g. performance? Here's my normal query style:

      SELECT * FROM table1 AS tbl1, table2 AS tbl2
      WHERE tbl1.id = tbl2.table_id

    as compared to:

      SELECT * FROM table1 AS tbl1
      LEFT JOIN table2 AS tbl2 ON tbl1.id = tbl2.id

    Personally I prefer the first style... hmm...
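
    The difference is not just syntax: the comma join is an implicit INNER JOIN and drops unmatched rows, while a LEFT JOIN keeps every row from the left table, padding the right side with NULLs. A runnable illustration (SQLite is used here purely for self-containment; MySQL behaves the same way):

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE table1 (id INTEGER);
          CREATE TABLE table2 (table_id INTEGER, name TEXT);
          INSERT INTO table1 VALUES (1), (2);
          INSERT INTO table2 VALUES (1, 'match');
      """)

      print(db.execute("SELECT * FROM table1 t1, table2 t2"
                       " WHERE t1.id = t2.table_id").fetchall())
      # [(1, 1, 'match')]                    -- the id=2 row disappears

      print(db.execute("SELECT * FROM table1 t1 LEFT JOIN table2 t2"
                       " ON t1.id = t2.table_id").fetchall())
      # [(1, 1, 'match'), (2, None, None)]   -- id=2 kept, padded with NULLs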


  • What should a Django user know when moving from MySQL to PostgreSQL?

    - by tmitchell
    Most of my experience with Django thus far has been with MySQL and mysqldb. For a new app I'm writing, I'm dipping my toe in the PostgreSQL water, now that I have seen the light. While writing a data import script, I stumbled upon an issue with the default autocommit behavior. I would guess there are other "gotchas" that might crop up. What else should I be on the lookout for?
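
    The autocommit gotcha in a nutshell, sketched with psycopg2 (the driver Django uses for PostgreSQL; connection parameters are placeholders): psycopg2 implicitly opens a transaction, so an import script's writes vanish on exit unless they are committed or autocommit is enabled:

      import psycopg2

      conn = psycopg2.connect(dbname="mydb", user="me", password="pw")
      cur = conn.cursor()
      cur.execute("INSERT INTO myapp_table (name) VALUES (%s)", ("example",))
      conn.commit()            # without this, the INSERT is rolled back on close

      conn.autocommit = True   # alternative: commit every statement immediately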


  • Access 2007 file picker replaces all rows with the same choice

    - by SqlStruggle
    This code is from an Access 2007 project I've been struggling with. The mean part is where I should put something like "update only the current form":

      DoCmd.RunSQL "Update Korut Set [PikkuKuva]=('" & varFile & "') ;"

    Could someone please help me with this? If I use it now, it updates all the rows with the same file picked. Here's the whole code:

      ' This requires a reference to the Microsoft Office 11.0 Object Library.
      Dim fDialog As Office.FileDialog
      Dim varFile As Variant
      Dim filePath As String

      ' Set up the File dialog box.
      Set fDialog = Application.FileDialog(msoFileDialogFilePicker)
      With fDialog
          ' Allow the user to make multiple selections in the dialog box.
          .AllowMultiSelect = False
          ' Set the title of the dialog box.
          .Title = "Valitse Tiedosto"
          ' Clear out the current filters, and then add your own.
          .Filters.Clear
          .Filters.Add "All Files", "*.*"
          ' The user picked at least one file. If the .Show method returns
          ' False, the user clicked Cancel.
          If .Show = True Then
              ' Loop through each file that is selected and then add it to the list box.
              For Each varFile In .SelectedItems
                  DoCmd.SetWarnings True
                  DoCmd.RunSQL "Update Korut Set [PikkuKuva]=('" & varFile & "') ;"
              Next
          Else
              MsgBox "You clicked Cancel in the file dialog box."
          End If
      End With


  • How can I optimize the import of this dataset in MySQL?

    - by GeoffreyF67
    I've got the following table schema:

      CREATE TABLE `alexa` (
        `id` int(10) unsigned NOT NULL,
        `rank` int(10) unsigned NOT NULL,
        `domain` varchar(63) NOT NULL,
        `domainStatus` varchar(6) DEFAULT NULL,
        PRIMARY KEY (`rank`),
        KEY `domain` (`domain`),
        KEY `id` (`id`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    It takes several minutes to import the data. To me that seems rather slow, as we're only talking about a million rows of data. What can I do to optimize the insert of this data? (I'm already using DISABLE KEYS.)

    G-Man
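
    Beyond DISABLE KEYS, the standard bulk-load route is LOAD DATA INFILE rather than individual INSERTs; a hedged sketch with mysql-connector-python (the file path, CSV layout, and connection details are assumptions, and local_infile must be enabled on the server):

      import mysql.connector

      conn = mysql.connector.connect(user="user", password="pw",
                                     database="test", allow_local_infile=True)
      cur = conn.cursor()
      cur.execute("ALTER TABLE `alexa` DISABLE KEYS")
      cur.execute("""
          LOAD DATA LOCAL INFILE '/tmp/alexa.csv'
          INTO TABLE `alexa`
          FIELDS TERMINATED BY ','
          (`id`, `rank`, `domain`, `domainStatus`)
      """)
      cur.execute("ALTER TABLE `alexa` ENABLE KEYS")
      conn.commit()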


  • Translating IPTables rule to UFW

    - by Dario Fumagalli
    We are using an Ubuntu 12.04 x64 LTS VPS, and the firewall being used is UFW. I have set up a Varnish + LEMP stack, along with other things, including an Openswan IPsec VPN from our office to the VPS data center. A second in-house Ubuntu box is to act as a MySQL slave and fetch data from the VPS through the VPN. The master's ppp0 is seen as 10.1.2.1 from the slave; they ping each other, etc. I have done the various required tasks, but I can't get the client (slave) MySQL (nor telnet 10.1.2.1 3306) to access the master through the VPN unless I issue this fairly obvious iptables command:

      iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 3306 -j ACCEPT

    (I willingly restricted the accepted input to that /24.) With this rule everything works just fine! However, I want to translate this command to UFW syntax so as to keep everything in one place. Now, I admit being inexperienced with UFW. I prepared rules like:

      ufw allow proto tcp from 10.1.2.0/24 port mysql

    and 2-3 variations involving specifying 3306 instead of mysql, specifying a target IP (MySQL's my.cnf at the moment is configured to bind to 0.0.0.0), and similar, but I just don't seem to be able to replicate the simple iptables rule in a functional way. Could anyone kindly give me a suggestion that does not involve dumping UFW? Thanks in advance.
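
    For reference, the closest UFW equivalent of that iptables rule should be ufw allow proto tcp from 10.1.2.0/24 to any port 3306; the "to any port 3306" part is what the attempted variants were missing. Sketched here via Python's subprocess for a scripted setup (run as root):

      import subprocess

      subprocess.run(["ufw", "allow", "proto", "tcp",
                      "from", "10.1.2.0/24", "to", "any", "port", "3306"],
                     check=True)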


  • Using the erlang mysql module, how is a database connection closed?

    - by Dan
    In using the erlang mysql module, the exposed external functions are:

      %% External exports
      -export([start_link/5, start_link/6, start_link/7, start_link/8,
               start/5, start/6, start/7, start/8,
               connect/7, connect/8, connect/9,
               fetch/1, fetch/2, fetch/3,
               prepare/2, execute/1, execute/2, execute/3, execute/4,
               unprepare/1, get_prepared/1, get_prepared/2,
               transaction/2, transaction/3,
               get_result_field_info/1, get_result_rows/1,
               get_result_affected_rows/1, get_result_reason/1,
               encode/1, encode/2, asciz_binary/2
              ]).

    From this list, it is not apparent how to close a connection. How is a connection closed?


  • Excel 2010: Move data from multiple columns/rows to a single row

    - by frustrated529
    So frustrating! I get data sent to me and it looks like this:

      a  1
      a  2  2
      a  3  3
      b  1
      b  2  2
      b  3  3
      b  4  4
      b  5  5
      b  6  6

    and I need it to look like this:

      a  1  2  2  3  3
      b  1  2  2  3  3  4  4  5  5  6  6

    I have about 30 columns that need to move to the top value in their group, with the duplicates then removed (there are about 33 rows of duplicates, and I'm trying to get it down to about 8 rows). I have been searching forums for several days and trying bits and pieces of code. I am having such a tough time with VBA!!!!
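
    The transformation itself is just "group by the key in column A and concatenate the remaining values"; a sketch of the logic in Python (the goal is VBA, but the grouping is identical), using the sample rows above:

      from collections import OrderedDict

      rows = [("a", [1]), ("a", [2, 2]), ("a", [3, 3]),
              ("b", [1]), ("b", [2, 2]), ("b", [3, 3]),
              ("b", [4, 4]), ("b", [5, 5]), ("b", [6, 6])]

      merged = OrderedDict()                 # keeps first-seen key order
      for key, values in rows:
          merged.setdefault(key, []).extend(values)

      for key, values in merged.items():
          print(key, *values)
      # a 1 2 2 3 3
      # b 1 2 2 3 3 4 4 5 5 6 6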

