Search Results

Search found 96031 results on 3842 pages for 'mysql server'.


  • Cannot Display Data from MySQL table

    - by MxmastaMills
    I've got a pretty standard call to a MySQL database, and for some reason I can't get the code to work. Here's what I have:

      $mysqli = mysqli_connect("localhost","username","password");
      if (!$mysqli) {
          die('Could not connect: ' . mysqli_error($mysqli));
      }
      session_start();
      $sql = "SELECT * FROM jobs ORDER BY id DESC";
      $result = $mysqli->query($sql);
      $num_rows = mysqli_num_rows($result);

    Now, first, I know that it is connecting properly, because I'm not getting the die message; plus I added an else conditional in there previously and it checked out. The page displays, but I get the errors:

      Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given in blablabla/index.php on line 11
      Warning: mysqli_fetch_array() expects parameter 1 to be mysqli_result, boolean given in blablabla/index.php on line 12

    I've double-checked my database, and there is a table called jobs with a column called "id" (it's the primary key). The thing that confuses me is that this is code I literally copied and pasted from another site I built, and for some reason it doesn't work on this one (I copied and pasted it and then just changed the table name and columns accordingly). I saw the error and tried:

      $num_rows = $mysqli_result->num_rows;
      $row_array = $mysqli_result->fetch_array;

    and that fixed the errors but resulted in no data being passed (because obviously $mysqli_result has no value). I don't know why the error is calling for that (is it a difference in the version of MySQL or PHP from the other site)? Can someone help me track down the problem? Thanks so much. Sorry if it's something super simple that I'm overlooking; I've been at it for a while.
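
    A likely cause, given those two warnings, is that no database is ever selected: mysqli_connect() is called with only host, user, and password, so the SELECT fails and query() returns false instead of a mysqli_result. A minimal sketch of the fix (the database name is a placeholder):

      <?php
      // Pass the database name as the fourth argument (placeholder name here).
      $mysqli = mysqli_connect("localhost", "username", "password", "your_database");
      if (mysqli_connect_errno()) {
          die('Could not connect: ' . mysqli_connect_error());
      }

      $sql = "SELECT * FROM jobs ORDER BY id DESC";
      $result = $mysqli->query($sql);
      if ($result === false) {
          // Surface the real error instead of feeding false to mysqli_num_rows().
          die('Query failed: ' . $mysqli->error);
      }
      $num_rows = $result->num_rows;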

    Read the article

  • Mysql configuration problem

    - by jazzrai
    I have been trying since last night. At first it was working, but this morning it is not working again. I am installing MySQL version 5.0 on a Vista machine. When I try to configure it, it says:

    The security settings could not be applied to the database because the connection had failed with the following error. Error Nr. 1045: Access denied for user 'root'@'localhost' (using password: YES). If a personal firewall is running on your machine, please make sure you have opened TCP port 3306 for connections; otherwise no client application can connect to the server. After you have opened the port, please press Retry to apply the security settings. If you are re-installing after you just uninstalled the MySQL server, please note that the data directory was not removed automatically. Therefore the old password from your last installation is still needed to connect to the server. In this case please select skip now and re-run the Configuration Wizard from the start menu.

    I tried disabling the firewall and user accounts but am getting the same error. Can anyone suggest something, please?

    Read the article

  • Unable to change two things about a single row in mysql with php

    - by user1624005
    Here's the code:

      $id = intval($_POST['id']);
      $score = "'" . $_POST['score'] . "'";
      $shares = "'" . $_POST['shares'] . "'";
      $conn = new PDO('mysql:host=localhost;dbname=news', 'root', '');
      $stmt = $conn->prepare("UPDATE news SET 'shares' = :shares, 'score' = :score WHERE id = :id");
      $stmt -> execute(array(
          'shares' => $shares,
          'score' => $score,
          'id' => $id
      ));

    And it doesn't work. I am unsure how to see the error that I assume MySQL is giving somewhere, and I've tried everything I could think of: using double quotes and adding the variables into the statement right away, adding single quotes to shares and score. How am I supposed to be doing this?
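
    The single quotes around the column names in the UPDATE make MySQL read them as string literals rather than identifiers, and pre-wrapping the values in quotes is unnecessary once they are bound as parameters. A sketch of a corrected version (connection details as in the question):

      <?php
      $id     = intval($_POST['id']);
      $score  = $_POST['score'];   // no manual quoting; binding handles it
      $shares = $_POST['shares'];

      $conn = new PDO('mysql:host=localhost;dbname=news', 'root', '');
      // Make PDO throw on SQL errors so failures become visible.
      $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

      // Backticks (or nothing) around identifiers, never single quotes.
      $stmt = $conn->prepare(
          "UPDATE news SET `shares` = :shares, `score` = :score WHERE id = :id");
      $stmt->execute(array('shares' => $shares, 'score' => $score, 'id' => $id));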

    Read the article

  • Ubuntu + LigHTTPd: Server requests taking ages

    - by ctrl_freak
    I've had an issue since upgrading my distro a couple of weeks ago from Hardy: after making a request, there are increasingly long intervals where no data is received, as you can see from the picture below. http://i49.tinypic.com/2w5lvr9.png I have since reinstalled fresh from an Ubuntu 10.04 Server (i386) disk, but am still having the same issues. I'm running on a LigHTTPd, MySQL, PHP5 stack. The surprising thing is that local browsing using lynx is super fast, as expected. Initially, after reinstalling, I copied over the old configuration files from the previous installation, but I have since reinstalled LigHTTPd and rebuilt the config file from scratch. The only correlation I could find was that I attempted installation of ionCube and Zend Optimizer for a script I was testing; however, I would think they could no longer have an impact, seeing as I had reinstalled the OS. I have also removed Suhosin just in case, but it had no impact. I'm thinking it possibly has something to do with networking, but I wouldn't know where to start. The server is manually assigned an IP by its MAC address on the router. The fact that the time seems to be exponential (to a point) worries me. I've tried strace'ing the LigHTTPd and MySQL processes, but I couldn't see anything obvious, not that I'd really know what I'm looking for. RAM and CPU usage don't seem to be out of the ordinary, but I can't say they're perfect. I'm hoping someone has experienced the same, or can point me in a direction, as searching has proved fruitless since I don't know anything specific. Config files can be posted, if requested.

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: with this post you create a scenario that is unsupported by Microsoft. It will break your Microsoft support for this server, so handle with care. I am the administrator of a TFS environment with a lot of Project Collections. In the supported configuration for TFS 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed. Jim Lamb created a post on how you can modify your system to change this behaviour, but since I have so many Project Collections, I automated this with the TFS API. When you install a new build server via the UI, you do the following steps:

    1. Register the build service (this hooks the Windows server into the build server environment)
    2. Add a new build controller
    3. Add a new build agent

    So in pseudo code, the code would look like:

      foreach (projectCollection in GetAllProjectCollections)
      {
          CreateNewWindowsService();
          RegisterService();
          AddNewController();
          AddNewAgent();
      }

    The following code fragments show the most important parts of the method implementations. Attached is the full project.

    CreateNewWindowsService: we create a new Windows service with the SC command via the Diagnostics.Process class:

      var pi = new ProcessStartInfo("sc.exe")
      {
          Arguments = string.Format(
              "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",
              serviceHostName, tpcName)
      };
      Process.Start(pi);

      pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);
      Process.Start(pi);

    RegisterService: the trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. (To find information on these internals you need nice Microsoft friends and the .NET Reflector.)

      // Indicate which build service host instance we are using
      typeof(BuildServiceHostUtilities).Assembly
          .GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess")
          .InvokeMember("NamedInstance",
              System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static,
              null, null, new object[] { serviceName });

      // Create the build service host
      serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);
      serviceHost.Save();

      // Register the build service host
      BuildServiceHostUtilities.Register(serviceHost, user, password);

    AddNewController and AddNewAgent: once you have the BuildServiceHost, the rest is pretty straightforward. There are methods on the BuildServiceHost to modify the controllers and the agents:

      controller = serviceHost.CreateBuildController(controllerName);
      agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);
      controller.AddBuildAgent(agent);

    You have now seen the highlights of the application. If you need it and want sample material for when you work in this area, download the app: TFS2010_RegisterBuildServerToTPCs

    Read the article

  • Django-pyodbc SQL Server/freetds server connection problems on linux

    - by wizard
    Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnectW)')

    I'm migrating from developing on a Windows machine to a Linux machine in production, and I'm having issues with the FreeTDS driver. As far as I can tell, that error message means it can't find the driver. I can connect via the CLI with both sqsh and tsql. I've set up my settings.py as such:

      'bc2db': {
          'ENGINE': 'sql_server.pyodbc',
          'NAME': 'DataTEST',
          'USER': 'appuser',
          'PASSWORD': 'PASS',
          'HOST': 'bc2.domain.com',
          'options': {
              'driver': 'FreeTDS',
          }
      },

    Does anyone have any SQL Server experience with Django? Do I have to use a DSN? (How would I format that?)
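
    That error usually means unixODBC cannot find a driver registered under the name used in settings.py ('FreeTDS'), even though the FreeTDS library itself works via tsql. A sketch of the /etc/odbcinst.ini entry that would need to exist; the .so path is an assumption and varies by distribution:

      [FreeTDS]
      Description = FreeTDS driver for Microsoft SQL Server
      Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so

    A DSN defined in /etc/odbc.ini is an alternative, but shouldn't be required once the driver name resolves.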

    Read the article

  • Unable to send email using 3rd party server ( IIS 7, Windows Server 2008, ASP.NET )

    - by Reed
    Hello All, I am using IIS 7 on Server 2008. I just tried migrating my app from an older platform; everything works fine, except the email feature. This is my config:

      <mailSettings>
        <smtp from="[email protected]" deliveryMethod="Network">
          <network host="mail.xyz.com" port="25" userName="[email protected]" password="123" />
        </smtp>
      </mailSettings>

    Whenever I need to send an email I use:

      SmtpClient smtp = new SmtpClient();
      smtp.Send(email);

    The funny thing is I get absolutely no errors, however the email is never sent. The outbound firewall ruleset allows SMTP traffic. Any idea what I did wrong?

    Read the article

  • Is SSIS able to query flat files from another Windows Server?

    - by atricapilla
    I'm a pretty new SQL Server Integration Services (SSIS) user. Is SSIS able to query data from text files located on another Windows Server? I mean that when SSIS is installed on Windows Server A, is it able to query data from, e.g., a folder containing text files on Windows Server B (under the same domain)? I have used only the SAP BO Data Integrator ETL tool, and it cannot query flat files from another server: during execution, all files must be located on the Job Server machine that executes the job.

    Read the article

  • mysql connector/net ssl shuts down the server

    - by Simon
    Hello, when I try to connect to my server through Connector/Net using SSL with a PFX certificate, I have a problem establishing the connection: I get a connection timeout, and the server probably falls over (I don't know for sure, because I don't manage the server). On Windows XP everything works all right, but on Windows 7 it doesn't. Please, where is the problem: in Windows 7 or on the server (MySQL 5.0)? Sometimes I get a "Calling interface SSPI Failed" error, but not every time; sometimes there is only the connection timeout error. Thank you a lot for any help. Regards, simon

    Read the article

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was making a small attempt to connect to my local SQL Server using the IP 127.0.0.1. The IP is my local machine's, and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server, it gave me the following strange error: Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452). The reason was indeed strange, as I was trying to connect from the local box to the local box, and it said my login was from an untrusted domain. As my system is not part of any domain, this was really confusing to me. Another strange thing was that I had always been able to connect to SQL Server using 127.0.0.1 before. I started to think about what I had changed since the last time I connected to SQL Server. Suddenly, I remembered that I had modified my computer's hosts file for some other purpose. Solution: I opened my hosts file and immediately added an entry like 127.0.0.1 localhost. Once I added it, I was able to reconnect to SQL Server as usual. The location of the hosts file is C:\Windows\System32\drivers\etc; you will find the file named hosts in it. Make sure to open it with Notepad. If you are part of a domain and your organization is using Active Directory, make sure that your account is added properly to Active Directory and has the proper security permissions to execute the task. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is:

      SELECT table1.term, table2.keyword
      FROM table1
      INNER JOIN table2
        ON table1.term LIKE CONCAT('%', table2.keyword, '%');

    This is not working; it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes: as for server optimization, both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8G of RAM, fine-tuned for the MySQL workload.
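
    Since LIKE '%keyword%' can't use a B-tree index, every keyword forces a scan of all 20 million terms. One common alternative, assuming word-level matches are acceptable (a FULLTEXT index finds whole words, unlike LIKE, which also matches substrings inside words), is sketched below; AGAINST() only accepts a constant, so the keywords are iterated application-side, but each probe becomes an index lookup instead of a table scan:

      <?php
      $mysqli = mysqli_connect("localhost", "user", "pass", "mydb"); // placeholders

      // One-time setup; MyISAM supports FULLTEXT indexes natively.
      $mysqli->query("ALTER TABLE table1 ADD FULLTEXT INDEX ft_term (term)");

      $stmt = $mysqli->prepare(
          "SELECT term FROM table1 WHERE MATCH(term) AGAINST (? IN BOOLEAN MODE)");
      $stmt->bind_param("s", $kw);  // bound by reference, reused each loop

      $keywords = $mysqli->query("SELECT keyword FROM table2");
      while ($row = $keywords->fetch_assoc()) {
          $kw = $row['keyword'];
          $stmt->execute();
          // ... fetch and record the matching terms for this keyword
      }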

    Read the article

  • Mysql table data problem?

    - by DaTeNtImE
    I'm new to MySQL and was wondering: how can I add the user's date of birth, collected in the following HTML form, to the MySQL table listed below? What would the column definition look like, for example alongside email VARCHAR(80) NOT NULL,? Here is the HTML code:

      <li><label>Date of Birth: </label>
        <label for="month">Month: </label>
        <select name="month" id="month">
          <option value="January">January</option> <option value="February">February</option>
          <option value="March">March</option> <option value="April">April</option>
          <option value="May">May</option> <option value="June">June</option>
          <option value="July">July</option> <option value="August">August</option>
          <option value="September">September</option> <option value="October">October</option>
          <option value="November">November</option> <option value="December">December</option>
        </select>
        <label for="day">Day: </label>
        <select id="day" name="day">
          <option value="0" selected="selected">Day</option>
          <option value="1">1</option> <option value="2">2</option> <option value="3">3</option>
          <option value="4">4</option> <option value="5">5</option> <option value="6">6</option>
          <option value="7">7</option> <option value="8">8</option> <option value="9">9</option>
          <option value="10">10</option> <option value="11">11</option> <option value="12">12</option>
          <option value="13">13</option> <option value="14">14</option> <option value="15">15</option>
          <option value="16">16</option> <option value="17">17</option> <option value="18">18</option>
          <option value="19">19</option> <option value="20">20</option> <option value="21">21</option>
          <option value="22">22</option> <option value="23">23</option> <option value="24">24</option>
          <option value="25">25</option> <option value="26">26</option> <option value="27">27</option>
          <option value="28">28</option> <option value="29">29</option> <option value="30">30</option>
          <option value="31">31</option>
        </select>
        <label for="year">Year: </label><input type="text" name="year" id="year" /></li>

    Here is the MySQL table:

      CREATE TABLE users (
        user_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        first_name VARCHAR(20) NOT NULL,
        last_name VARCHAR(40) NOT NULL,
        email VARCHAR(80) NOT NULL,
        pass CHAR(40) NOT NULL,
        user_level TINYINT(1) UNSIGNED NOT NULL DEFAULT 0,
        active CHAR(32),
        registration_date DATETIME NOT NULL,
        PRIMARY KEY (user_id),
        UNIQUE KEY (email),
        INDEX login (email, pass)
      );
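
    A single DATE column is the usual fit here, e.g. birthdate DATE NOT NULL, alongside the other columns in the CREATE TABLE above. MySQL's DATE format is YYYY-MM-DD, and since the month <select> above posts month names, they have to be mapped to numbers first; a sketch (validation omitted):

      <?php
      // Map the posted month name ("January" ... "December") to 1-12.
      $months = array('January','February','March','April','May','June',
                      'July','August','September','October','November','December');
      $monthNum = array_search($_POST['month'], $months) + 1;

      // Build the value to insert into the birthdate DATE column.
      $birthdate = sprintf('%04d-%02d-%02d',
          (int)$_POST['year'], $monthNum, (int)$_POST['day']);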

    Read the article

  • How to change data structure in mysql using mysqldump without deleting files

    - by Don Quixote
    Essentially what I'm trying to do is sync a production server with a sandbox server, but only the table structures and stored procedures. The procedures aren't any problem, since they can be overridden, but the problem is the tables. I want to sync and alter their structures on the production server using mysqldump (or any other way that you can propose) without altering any existing data. If it helps, I only want to add more columns, not remove any existing ones. Also, I am using mysqlyog. Is there any way to do this?
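
    For producing the structure itself, mysqldump can skip row data entirely; a sketch of generating a reference schema (host and credential flags are placeholders):

      mysqldump --no-data --routines -h sandbox-host -u user -p sandbox_db > schema.sql

    Note that replaying a plain dump would drop and recreate the tables, so against tables that already hold data the additions still have to be applied as ALTER TABLE ... ADD COLUMN statements, or via a schema-comparison feature such as the one in SQLyog, if that is the tool meant here.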

    Read the article

  • Delphi and mysql - Unable to connect to server... maybe custom connection required

    - by Steve
    I am coding an application for my company wherein I want to parse the results of a MySQL query and display them in my application, but I am facing a problem connecting to the database. The IP address of the server is 172.30.192.20, and before I can ping it I have to add a route on my PC, something like this: route add 172.30.192.0 mask 255.255.255.0 172.30.192.56, where 172.30.192.56 is the gateway. Now, whenever I try to connect to 172.30.192.20, which is where the SQL server is running, my application instead connects to 172.30.192.56. I am coding the application in Delphi and have used TmySQL. After this didn't work out, I tried an application called SQLwave. I just entered the server IP address and was able to connect to the database without any problems. It seems SQLwave uses MyDAC, which is why I tried using it too, but with the default connection options and settings I was still not able to connect. It seems SQLwave makes a custom connection using MyDAC. I just want to know what's going wrong with my connection.

    Read the article

  • SQL Joining Two or More from Table B with Common Data in Table A

    - by Matthew Frederick
    The real-world situation is a series of events that each have two or more participants (like sports teams, though there can be more than two in an event), only one of which is the host of the event. There is an Event db table for each unique event and a Participant db table with unique participants. They are joined together using a Matchup table. They look like this:

      Event
        EventID (PK)
        (other event data like the date, etc.)

      Participant
        ParticipantID (PK)
        Name

      Matchup
        EventID (FK to Event table)
        ParticipantID (FK to Participant)
        Host (1 or 0, only 1 host = 1 per EventID)

    What I'd like to get as a result is something like this:

      EventID
      ParticipantID where host = 1
      Participant.Name where host = 1
      ParticipantID where host = 0
      Participant.Name where host = 0
      ParticipantID where host = 0
      Participant.Name where host = 0
      ...

    Where one event has 2 participants and another has 3 participants, for example, the third participant's columns would be null or otherwise noticeable, something like (PID = ParticipantID):

      EventID  PID-1 (host)  Name-1 (host)  PID-2  Name-2  PID-3  Name-3
      -------  ------------  -------------  -----  ------  -----  ------
      1        7             Lions          8      Tigers  12     Bears
      2        11            Dogs           9      Cats    NULL   NULL

    I suspect the answer is reasonably straightforward, but for some reason I'm not wrapping my head around it. Alternately, it's very difficult. :) I'm using MySQL 5, if that affects the available SQL.
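
    Since the number of guests varies per event, MySQL can't produce a variable set of columns directly; one common sketch (works on MySQL 5) pivots the host into fixed columns and concatenates the guests, which application code then splits:

      <?php
      // One row per event: host id/name in their own columns, guests as
      // comma-separated lists (GROUP_CONCAT), to be split application-side.
      $sql = "SELECT e.EventID,
                     hm.ParticipantID               AS host_id,
                     hp.Name                        AS host_name,
                     GROUP_CONCAT(gm.ParticipantID) AS guest_ids,
                     GROUP_CONCAT(gp.Name)          AS guest_names
              FROM Event e
              JOIN Matchup     hm ON hm.EventID = e.EventID AND hm.Host = 1
              JOIN Participant hp ON hp.ParticipantID = hm.ParticipantID
              JOIN Matchup     gm ON gm.EventID = e.EventID AND gm.Host = 0
              JOIN Participant gp ON gp.ParticipantID = gm.ParticipantID
              GROUP BY e.EventID, hm.ParticipantID, hp.Name";

    If the maximum number of participants is small and fixed, repeated LEFT JOINs on Matchup (one per guest slot) would yield true NULL-padded columns instead.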

    Read the article

  • Help with SQL Query

    - by djfrear
    With regards to the following statement:

      Select * From explorer.booking_record booking_record_
      Inner Join explorer.client client_ On booking_record_.labelno = client_.labelno
      Inner Join explorer.tour_hotel tour_hotel_ On tour_hotel_.tourcode = booking_record_.tourrefcode
      Inner Join explorer.hotelrecord hotelrecord_ On tour_hotel_.hotelcode = hotelrecord_.hotelref
      Where booking_record_.bookingdate Not Like '0000-00-00'
      And booking_record_.tourdeparturedate Not Like '0000-00-00'
      And hotelrecord_.hotelgroup = "LPL"
      And Year(booking_record_.tourdeparturedate) Between Year(AddDate(Now(), Interval -5 Year)) And Year(Now())

    My MySQL skills are certainly not up to scratch. The actual result set I wish to find is "a customer who has been to 5 or more LPL hotels in the past 5 years". So far I haven't got as far as dealing with the count, as I'm getting a huge number of results, some 250+ per customer. I assume this is to do with the way I'm joining tables. Schema-wise, the booking_record table contains a tour reference code, which links to tour_hotel, which then contains a hotelcode that links to hotelrecord. This hotelrecord table contains the hotelgroup. The client table is joined to booking_record via a booking reference, and a client may have many bookings. Customers may have many bookings within booking_record. If anyone could suggest a way for me to do this I'd be very grateful, and hopefully I'll learn enough to do it myself next time! I've been scratching my head over this one for a few hours now! Daniel.
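
    The row explosion is expected: the join yields one row per booking-hotel combination. Grouping by client and filtering with HAVING gets to "5 or more LPL hotels in the past 5 years" directly. A sketch: the client's identifying column is assumed from the join above, and DISTINCT counts different hotels rather than repeat visits (drop it if repeat stays should count):

      <?php
      $sql = "SELECT c.labelno, COUNT(DISTINCT hr.hotelref) AS lpl_hotels
              FROM explorer.booking_record br
              INNER JOIN explorer.client      c  ON br.labelno   = c.labelno
              INNER JOIN explorer.tour_hotel  th ON th.tourcode  = br.tourrefcode
              INNER JOIN explorer.hotelrecord hr ON th.hotelcode = hr.hotelref
              WHERE br.bookingdate NOT LIKE '0000-00-00'
                AND hr.hotelgroup = 'LPL'
                AND br.tourdeparturedate >= DATE_SUB(CURDATE(), INTERVAL 5 YEAR)
              GROUP BY c.labelno
              HAVING COUNT(DISTINCT hr.hotelref) >= 5";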

    Read the article

  • Adding one subquery makes query a little slower, adding another makes it way slower

    - by Jason Swett
    This is fast:

      select ba.name,
             penamt.value penamt,
             #address_line4.value address_line4
      from account a
      join customer c on a.customer_id = c.id
      join branch br on a.branch_id = br.id
      join bank ba on br.bank_id = ba.id
      join account_address aa on aa.account_id = a.id
      join address ad on aa.address_id = ad.id
      join state s on ad.state_id = s.id
      join import i on a.import_id = i.id
      join import_bundle ib on i.import_bundle_id = ib.id
      join (select * from unused where heading_label = 'PENAMT') penamt ON penamt.account_id = a.id
      #join (select * from unused where heading_label = 'Address Line 4') address_line4 ON address_line4.account_id = a.id
      where i.active=1

    And this is fast:

      select ba.name,
             #penamt.value penamt,
             address_line4.value address_line4
      from account a
      join customer c on a.customer_id = c.id
      join branch br on a.branch_id = br.id
      join bank ba on br.bank_id = ba.id
      join account_address aa on aa.account_id = a.id
      join address ad on aa.address_id = ad.id
      join state s on ad.state_id = s.id
      join import i on a.import_id = i.id
      join import_bundle ib on i.import_bundle_id = ib.id
      #join (select * from unused where heading_label = 'PENAMT') penamt ON penamt.account_id = a.id
      join (select * from unused where heading_label = 'Address Line 4') address_line4 ON address_line4.account_id = a.id
      where i.active=1

    But this is slow:

      select ba.name,
             penamt.value penamt,
             address_line4.value address_line4
      from account a
      join customer c on a.customer_id = c.id
      join branch br on a.branch_id = br.id
      join bank ba on br.bank_id = ba.id
      join account_address aa on aa.account_id = a.id
      join address ad on aa.address_id = ad.id
      join state s on ad.state_id = s.id
      join import i on a.import_id = i.id
      join import_bundle ib on i.import_bundle_id = ib.id
      join (select * from unused where heading_label = 'PENAMT') penamt ON penamt.account_id = a.id
      join (select * from unused where heading_label = 'Address Line 4') address_line4 ON address_line4.account_id = a.id
      where i.active=1

    Why is it fast when I include just one of the two subqueries, but slow when I include both? I would think it should be twice as slow when I include both, but it takes a really long time. I'm on MySQL.
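
    A plausible explanation: MySQL (before 5.6) materializes each derived table, i.e. (select * from unused where ...), as a temporary table with no indexes. With one derived table, the optimizer can still drive the join from the indexed base tables; with two, it ends up joining two unindexed temporary tables against each other, which is far worse than twice the work. A sketch that joins unused directly so its own index stays usable (the composite index and its name are assumptions):

      <?php
      // One-time: composite index so each lookup is (account_id, heading_label).
      $sqlIndex = "ALTER TABLE unused
                   ADD INDEX idx_acct_heading (account_id, heading_label)";

      // Filter in the JOIN condition instead of materializing subqueries.
      $sql = "select ba.name, penamt.value penamt, address_line4.value address_line4
              from account a
              join customer c on a.customer_id = c.id
              join branch br on a.branch_id = br.id
              join bank ba on br.bank_id = ba.id
              join account_address aa on aa.account_id = a.id
              join address ad on aa.address_id = ad.id
              join state s on ad.state_id = s.id
              join import i on a.import_id = i.id
              join import_bundle ib on i.import_bundle_id = ib.id
              join unused penamt        on penamt.account_id = a.id
                                       and penamt.heading_label = 'PENAMT'
              join unused address_line4 on address_line4.account_id = a.id
                                       and address_line4.heading_label = 'Address Line 4'
              where i.active = 1";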

    Read the article

  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. If that's confusing, here's a diagram of the structure: [diagram image not included]. I would like to produce a list of records that match these criteria: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug fields:

      SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records
      FROM people_person
      GROUP BY slug
      HAVING (COUNT(slug) > 1)
      ORDER BY last_name, first_name, id;

    But this includes records for people that may not have at least 1 article, blog entry, or podcast. Can I tweak this to fit the second criterion?
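
    One sketch: keep the grouping and add EXISTS tests for the related content. The related table and foreign-key names (articles.person_id, blog_entries.person_id, podcast_episodes.person_id) are assumptions, since only the Person table is shown:

      <?php
      $sql = "SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records
              FROM people_person p
              WHERE EXISTS (SELECT 1 FROM articles         a WHERE a.person_id = p.id)
                 OR EXISTS (SELECT 1 FROM blog_entries     b WHERE b.person_id = p.id)
                 OR EXISTS (SELECT 1 FROM podcast_episodes e WHERE e.person_id = p.id)
              GROUP BY slug
              HAVING COUNT(slug) > 1
              ORDER BY last_name, first_name, id";

    As written this counts only the duplicate rows that themselves have content; if a duplicate pair should qualify when either copy has content, the EXISTS tests would need to check every person sharing the slug.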

    Read the article

  • I have created a PHP script and I am unable to extract the primary key; I have given the flow below, please help

    - by Parth
    I am using a MySQL DB, working with Joomla. My requirement is to track activity like insert/update/delete on any table and store it in another audit table using triggers, i.e., I am doing auditing. About the DB's table structure: a few tables have neither a primary key nor an auto-increment key. The flow of my script is:

    1. I fetch all tables from the DB.
    2. I check whether the table has any trigger or not.
    3. If yes, it moves on to check the next table, and so on.
    4. If it doesn't find any trigger, it creates the triggers for the table, such that:
       - it first checks whether the table has a primary key or not (for inserting into the tracking audit table on every change made);
       - if it has a primary key, it uses it in the creation of the trigger;
       - if it doesn't find any PK, it proceeds with creating the trigger without inserting any id into the audit table.

    Now here is my problem: I need the PK every time so that I can record the id of the row in whichever table the insert/update/delete is performed, so that I can later use this audit track table to replicate the changes into the production DB. As mentioned earlier, some tables have no PK and no auto-incremented key, so what should I do to get the particular id in which the change was done? Please guide me... GEEKS!!!
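
    For the primary-key check in step 4, information_schema exposes it directly; a sketch, assuming $tableName comes from the table list fetched in step 1:

      <?php
      // Tables without a PRIMARY KEY constraint simply return zero rows,
      // which is the cue to create the trigger without an id column.
      $sql = "SELECT COLUMN_NAME
              FROM information_schema.KEY_COLUMN_USAGE
              WHERE TABLE_SCHEMA    = DATABASE()
                AND TABLE_NAME      = '$tableName'
                AND CONSTRAINT_NAME = 'PRIMARY'
              ORDER BY ORDINAL_POSITION";

    For the tables with no PK at all, one option, if altering them is acceptable, is to add a surrogate key first (ALTER TABLE t ADD id INT AUTO_INCREMENT PRIMARY KEY) so the audit rows always have an id to record.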

    Read the article

  • Querying calendar events even if they do not have any for the day

    - by StealthRT
    Hey everyone, I am trying to figure out a way of querying my MySQL server so that even if a company does not have anything posted for the day the user clicks on their logo, it still adds them to the list. That sounds a little confusing, so let me try to explain it another way. Say I have 3 companies in my database: Comp1, Comp2, Comp3. And Comp1 & Comp3 have something on the calendar for today, but Comp2 does not. I still need it to populate and place that company on the page, but with something along the lines of "nothing on the calendar for today". The other 2 companies (Comp1 & Comp3) would show the calendar posting for that day. This is the code I have right now:

      SELECT clientinfo.id, clientinfo.theCompName, clientinfo.theURL, clientinfo.picURL,
             clientinfo.idNumber, clientoffers.idNumber, clientoffers.theDateStart, clientoffers.theDateEnd
      FROM clientinfo, clientoffers
      WHERE clientinfo.accountStats = 'OPEN'
        AND clientinfo.idNumber = clientinfo.idNumber
        AND '2010-05-08' BETWEEN clientoffers.theDateStart AND clientoffers.theDateEnd
      GROUP BY clientinfo.idNumber
      ORDER BY clientinfo.theCompName ASC

    That executes just fine, but for Comp2 it just places the calendar info from Comp1 into it when it really doesn't have anything. The output looks like this:

      Comp1 | 2010-05-08 | this is the calendar event 1 | etc etc
      Comp2 | 2010-05-08 | this is the calendar event 1 | etc etc
      Comp3 | 2010-05-09 | this is the calendar event 2 | etc etc

    Any help would be great :o) David
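
    A LEFT JOIN keeps every open company and attaches an offer row only when one exists for the day; companies with no posting come back with NULL offer columns, which the page can render as "nothing on the calendar for today". (Note the original WHERE clause compares clientinfo.idNumber to itself, which is always true, hence the cross-matching.) A sketch:

      <?php
      $sql = "SELECT ci.id, ci.theCompName, ci.theURL, ci.picURL, ci.idNumber,
                     co.theDateStart, co.theDateEnd
              FROM clientinfo ci
              LEFT JOIN clientoffers co
                     ON co.idNumber = ci.idNumber
                    AND '2010-05-08' BETWEEN co.theDateStart AND co.theDateEnd
              WHERE ci.accountStats = 'OPEN'
              ORDER BY ci.theCompName ASC";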

    Read the article

  • Ultra-grand super acts_as_tree rails query

    - by Bloudermilk
    Right now I'm dealing with an issue regarding an intense acts_as_tree MySQL query via Rails. The model I am querying is Foo. A Foo can belong to any one City, State or Country. My goal is to query Foos based on their location. My locations table is set up like so:

    - I have a table in my database called locations.
    - I use a combination of acts_as_tree and polymorphic associations to store each individual location as either a City, State or Country. (This means that my table consists of the columns id, name, parent_id, and type.)

    Let's say, for instance, I want to query Foos in the state "California". Besides Foos that directly belong to "California", I should get all Foos that belong to every City in "California", like Foos in "Los Angeles" and "San Francisco". Not only that, but I should also get any Foos that belong to the Country that "California" is in, "United States". I've tried a few things with associations to no avail. I feel like I'm missing some super-helpful Rails-fu here. Any advice?

    Read the article

  • Trying to build a dynamic PHP mysql_query string to update a row and getting back the updated row

    - by adardesign
    I have a form whose onChange .change() event jQuery tracks, so when something is changed it runs an ajax request, and I pass the column, id, and value in the URL. Below is the PHP code that should update the data. My question is how do I build the MySQL query string dynamically, and how do I echo back the changes/updates that were just made in the db? Here is the PHP code I am trying to work with:

      <?php require_once('Connections/connect.php'); ?>
      <?php
      $id = $_GET['id'];
      $collumn = $_GET['collumn'];
      $val = $_GET['val'];
      ?>
      <?php
      mysql_select_db($myDB, $connection);
      // here i try to build the query string and pass in the passed in values
      $sqlUpdate = 'UPDATE `plProducts`.`allPens` SET `$collumn` = '$val' WHERE `allPens`.`prodId` = '$id' LIMIT 1;';
      // here i want to echo back the updated row (or the updated data)
      $seeResults = mysql_query($sqlUpdate, $connection);
      echo $seeResults
      ?>
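
    Two things to note: a single-quoted PHP string never interpolates $collumn or $val (and the mixed quotes above are a parse error), and an UPDATE returns only true/false, so echoing the fresh row means selecting it again. A sketch using the same mysql_* API; the whitelist column names are hypothetical:

      <?php
      // The column name can't be escaped like a value, so whitelist it.
      $allowed = array('prodName', 'prodPrice', 'prodDesc'); // hypothetical names
      if (!in_array($collumn, $allowed)) { die('bad column'); }

      $val = mysql_real_escape_string($val, $connection);
      $id  = intval($id);

      $sqlUpdate = "UPDATE `plProducts`.`allPens`
                    SET `$collumn` = '$val'
                    WHERE `allPens`.`prodId` = $id LIMIT 1";
      mysql_query($sqlUpdate, $connection) or die(mysql_error($connection));

      // UPDATE returns no row data; re-query to echo the updated record.
      $res = mysql_query(
          "SELECT * FROM `plProducts`.`allPens` WHERE prodId = $id", $connection);
      echo json_encode(mysql_fetch_assoc($res));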

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to shrinking databases, about why shrinking a database is not good:

    SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server
    SQL SERVER – What the Business Says Is Not What the Business Wants

    I received many comments on why database shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:

    1. Create a test database
    2. Create two tables and populate them with data
    3. Check the size of both tables (the size of the database is very low)
    4. Check the fragmentation of one table (fragmentation will be very low)
    5. Truncate the other table
    6. Check the size of the table
    7. Check the fragmentation of the one table (fragmentation will be very low)
    8. SHRINK the database
    9. Check the size of the table
    10. Check the fragmentation of the one table (fragmentation will be very HIGH)
    11. REBUILD the index on one table
    12. Check the size of the table (the size of the database is very HIGH)
    13. Check the fragmentation of the one table (fragmentation will be very low)

    Here is the script for the same:

      USE MASTER
      GO
      CREATE DATABASE ShrinkIsBed
      GO
      USE ShrinkIsBed
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Create FirstTable
      CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100),
        LastName VARCHAR(100), City VARCHAR(100))
      GO
      -- Create Clustered Index on ID
      CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ([ID] ASC) ON [PRIMARY]
      GO
      -- Create SecondTable
      CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100),
        LastName VARCHAR(100), City VARCHAR(100))
      GO
      -- Create Clustered Index on ID
      CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ([ID] ASC) ON [PRIMARY]
      GO
      -- Insert One Hundred Thousand Records
      INSERT INTO FirstTable (ID, FirstName, LastName, City)
      SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Insert One Hundred Thousand Records
      INSERT INTO SecondTable (ID, FirstName, LastName, City)
      SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Check Fragmentations in the database
      SELECT avg_fragmentation_in_percent, fragment_count
      FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
      GO

    Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation:

      -- TRUNCATE the table
      TRUNCATE TABLE SecondTable
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Check Fragmentations in the database
      SELECT avg_fragmentation_in_percent, fragment_count
      FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
      GO

    You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. Let us now SHRINK the database and check again:

      -- Shrink the Database
      DBCC SHRINKDATABASE (ShrinkIsBed);
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Check Fragmentations in the database
      SELECT avg_fragmentation_in_percent, fragment_count
      FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
      GO

    After the shrinking operation, we were able to reduce the size of the database. If you notice the fragmentation, though, it is considerably high. The major problem with the shrink operation is that it increases the fragmentation of the database to a very high value, and higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size:

      -- Rebuild Index on SecondTable
      ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Check Fragmentations in the database
      SELECT avg_fragmentation_in_percent, fragment_count
      FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
      GO

    You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size increases to way more than the original. Before rebuilding, the size of the database was 5 MB; after rebuilding, it is around 20 MB. A regular index rebuild happens in the same user database where the index is placed, and this usually increases the size of the database. Look at the irony of shrinking a database: one person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which makes the database grow way beyond its original size (before shrinking). So by shrinking, one usually does not gain what one was looking for, and rebuilding the index is not the best suggestion either, as it will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and interesting conversation. Let us run the following script, where we shrink the database and REORGANIZE:

      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Check Fragmentations in the database
      SELECT avg_fragmentation_in_percent, fragment_count
      FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
      GO
      -- Shrink the Database
      DBCC SHRINKDATABASE (ShrinkIsBed);
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Check Fragmentations in the database
      SELECT avg_fragmentation_in_percent, fragment_count
      FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
      GO
      -- Reorganize Index on SecondTable
      ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
      GO
      -- Name of the Database and Size
      SELECT name, (size*8) Size_KB
      FROM sys.database_files
      GO
      -- Check Fragmentations in the database
      SELECT avg_fragmentation_in_percent, fragment_count
      FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
      GO

    You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation using a demo. Read the blog post of Paul Randal. The following script will clean up the database:

      -- Clean up
      USE MASTER
      GO
      ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
      GO
      DROP DATABASE ShrinkIsBed
      GO

    There are a few valid cases for shrinking a database as well, but those are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild indexes in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it's the database that usually becomes the bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools and techniques that can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, and so on. Before we talk about how data warehouses are typically structured, let's understand the key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geographies, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases; e.g., SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and dumps them into the data warehouse. There are many tools out there that can help you fill this gap; SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question, though, remains: how will the warehouse record these changes? This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD).
    While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key. As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD as sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table that contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema. This relational structure of facts and dimensions serves as the input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy; for example, Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as Levels, and their constituents as Members, with the overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy, and these rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with Cubes and the rest. Let's see a few other terms that we run into while talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction in their report. Then there is another set of users who typically don't care how current the data is; mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc., and don't care about one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e., kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers. Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc.
    Hence they are often scaled down to something called a Data mart, which supports a specific segment of the business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse; the data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to set Grub to automatically load Xen kernel

    - by Cerin
    How do you configure Grub to automatically use the Xen kernel under Ubuntu 11.10? No matter what I do, it loads the first menuentry. The only way I can get it to load Xen is to manually select the kernel, which I can't do if I have to reboot the server remotely, or there's a power failure and the machine automatically boots up when power's restored, etc. It's driving me nuts. In my /boot/grub/grub.cfg, the Xen kernel is at index 4 (i.e. it's the 5th menuentry). So I've tried:

    1. Setting GRUB_DEFAULT=4, and running sudo update-grub
    2. Setting GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true, and running sudo update-grub
    3. Setting GRUB_DEFAULT="Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server", and running sudo update-grub

    None of these work. It continues to load the first menuentry, which is "Ubuntu, with Linux 3.0.0-16-server". Below is my current /boot/grub/grub.cfg. What am I doing wrong?

      #
      # DO NOT EDIT THIS FILE
      #
      # It is automatically generated by grub-mkconfig using templates
      # from /etc/grub.d and settings from /etc/default/grub
      #

      ### BEGIN /etc/grub.d/00_header ###
      if [ -s $prefix/grubenv ]; then
        set have_grubenv=true
        load_env
      fi
      set default="Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server"
      if [ "${prev_saved_entry}" ]; then
        set saved_entry="${prev_saved_entry}"
        save_env saved_entry
        set prev_saved_entry=
        save_env prev_saved_entry
        set boot_once=true
      fi
      function savedefault {
        if [ -z "${boot_once}" ]; then
          saved_entry="${chosen}"
          save_env saved_entry
        fi
      }
      function recordfail {
        set recordfail=1
        if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
      }
      function load_video {
        insmod vbe
        insmod vga
        insmod video_bochs
        insmod video_cirrus
      }
      insmod raid
      insmod mdraid1x
      insmod part_msdos
      insmod part_msdos
      insmod ext2
      set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
      search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
      if loadfont /usr/share/grub/unicode.pf2 ; then
        set gfxmode=auto
        load_video
        insmod gfxterm
        insmod raid
        insmod mdraid1x
        insmod part_msdos
        insmod part_msdos
        insmod ext2
        set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
        search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
        set locale_dir=($root)/boot/grub/locale
        set lang=en_US
        insmod gettext
      fi
      terminal_output gfxterm
      if [ "${recordfail}" = 1 ]; then
        set timeout=-1
      else
        set timeout=2
      fi
      ### END /etc/grub.d/00_header ###

      ### BEGIN /etc/grub.d/05_debian_theme ###
      set menu_color_normal=white/black
      set menu_color_highlight=black/light-gray
      if background_color 44,0,30; then
        clear
      fi
      ### END /etc/grub.d/05_debian_theme ###

      ### BEGIN /etc/grub.d/10_linux ###
      if [ ${recordfail} != 1 ]; then
        if [ -e ${prefix}/gfxblacklist.txt ]; then
          if hwmatch ${prefix}/gfxblacklist.txt 3; then
            if [ ${match} = 0 ]; then
              set linux_gfx_mode=keep
            else
              set linux_gfx_mode=text
            fi
          else
            set linux_gfx_mode=text
          fi
        else
          set linux_gfx_mode=keep
        fi
      else
        set linux_gfx_mode=text
      fi
      export linux_gfx_mode
      if [ "$linux_gfx_mode" != "text" ]; then load_video; fi
      menuentry 'Ubuntu, with Linux 3.0.0-16-server' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        set gfxpayload=$linux_gfx_mode
        insmod gzio
        insmod raid
        insmod mdraid1x
        insmod part_msdos
        insmod part_msdos
        insmod ext2
        set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
        search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
        linux /boot/vmlinuz-3.0.0-16-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
        initrd /boot/initrd.img-3.0.0-16-server
      }
      menuentry 'Ubuntu, with Linux 3.0.0-16-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        insmod gzio
        insmod raid
        insmod mdraid1x
        insmod part_msdos
        insmod part_msdos
        insmod ext2
        set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
        search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
        echo 'Loading Linux 3.0.0-16-server ...'
        linux /boot/vmlinuz-3.0.0-16-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro recovery nomodeset
        echo 'Loading initial ramdisk ...'
        initrd /boot/initrd.img-3.0.0-16-server
      }
      submenu "Previous Linux versions" {
        menuentry 'Ubuntu, with Linux 3.0.0-12-server' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          set gfxpayload=$linux_gfx_mode
          insmod gzio
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          linux /boot/vmlinuz-3.0.0-12-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
          initrd /boot/initrd.img-3.0.0-12-server
        }
        menuentry 'Ubuntu, with Linux 3.0.0-12-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod gzio
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          echo 'Loading Linux 3.0.0-12-server ...'
          linux /boot/vmlinuz-3.0.0-12-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro recovery nomodeset
          echo 'Loading initial ramdisk ...'
          initrd /boot/initrd.img-3.0.0-12-server
        }
      }
      ### END /etc/grub.d/10_linux ###

      ### BEGIN /etc/grub.d/20_linux_xen ###
      submenu "Xen 4.1-amd64" {
        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          echo 'Loading Xen 4.1-amd64 ...'
          multiboot /boot/xen-4.1-amd64.gz placeholder
          echo 'Loading Linux 3.0.0-16-server ...'
          module /boot/vmlinuz-3.0.0-16-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
          echo 'Loading initial ramdisk ...'
          module /boot/initrd.img-3.0.0-16-server
        }
        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          echo 'Loading Xen 4.1-amd64 ...'
          multiboot /boot/xen-4.1-amd64.gz placeholder
          echo 'Loading Linux 3.0.0-16-server ...'
          module /boot/vmlinuz-3.0.0-16-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro single
          echo 'Loading initial ramdisk ...'
          module /boot/initrd.img-3.0.0-16-server
        }
        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-12-server' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          echo 'Loading Xen 4.1-amd64 ...'
          multiboot /boot/xen-4.1-amd64.gz placeholder
          echo 'Loading Linux 3.0.0-12-server ...'
          module /boot/vmlinuz-3.0.0-12-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
          echo 'Loading initial ramdisk ...'
          module /boot/initrd.img-3.0.0-12-server
        }
        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-12-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          echo 'Loading Xen 4.1-amd64 ...'
          multiboot /boot/xen-4.1-amd64.gz placeholder
          echo 'Loading Linux 3.0.0-12-server ...'
          module /boot/vmlinuz-3.0.0-12-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro single
          echo 'Loading initial ramdisk ...'
          module /boot/initrd.img-3.0.0-12-server
        }
      }
      ### END /etc/grub.d/20_linux_xen ###

      ### BEGIN /etc/grub.d/20_memtest86+ ###
      menuentry "Memory test (memtest86+)" {
        insmod raid
        insmod mdraid1x
        insmod part_msdos
        insmod part_msdos
        insmod ext2
        set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
        search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
        linux16 /boot/memtest86+.bin
      }
      menuentry "Memory test (memtest86+, serial console 115200)" {
        insmod raid
        insmod mdraid1x
        insmod part_msdos
        insmod part_msdos
        insmod ext2
        set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
        search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
        linux16 /boot/memtest86+.bin console=ttyS0,115200n8
      }
      ### END /etc/grub.d/20_memtest86+ ###

      ### BEGIN /etc/grub.d/30_os-prober ###
      ### END /etc/grub.d/30_os-prober ###

      ### BEGIN /etc/grub.d/40_custom ###
      # This file provides an easy way to add custom menu entries. Simply type the
      # menu entries you want to add after this comment. Be careful not to change
      # the 'exec tail' line above.
      ### END /etc/grub.d/40_custom ###

      ### BEGIN /etc/grub.d/41_custom ###
      if [ -f $prefix/custom.cfg ]; then
        source $prefix/custom.cfg;
      fi
      ### END /etc/grub.d/41_custom ###
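
    For what it's worth, in this grub.cfg the Xen entry is nested inside submenu "Xen 4.1-amd64" (generated by 20_linux_xen), and GRUB_DEFAULT only counts top-level items, so neither the flat index 4 nor the bare title can match it. GRUB 2 addresses nested entries with a submenu-title>entry-title path; a sketch of the /etc/default/grub line, with the titles copied from the config above (run sudo update-grub afterwards):

      GRUB_DEFAULT="Xen 4.1-amd64>Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server"

    The numeric equivalent here would be "3>0", counting the two Ubuntu entries and the "Previous Linux versions" submenu before the Xen submenu; that indexing is an inference from the menu order above.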

    Read the article
