Search Results

Search found 20931 results on 838 pages for 'mysql insert'.


  • Does CHECK TABLE add read/write locks?

    - by Ztyx
    Hi, Yesterday I ran CHECK TABLE on a table that is read very frequently. I scanned the MySQL documentation for CHECK TABLE for any mention of "lock" (and found none) and also noticed that only the SELECT privilege is required to run the command. I therefore concluded that the command did not take any read lock and was safe to run even in production. Sadly, running the command took 1 minute and 37 seconds and seemed to block all read access. My question is therefore: does CHECK TABLE take any read lock? Is there any other reason why I would have experienced a read block on the table? Thanks
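    A hedged sketch of how this can happen and how to confirm it, assuming a MyISAM table (the table name below is hypothetical): CHECK TABLE holds a read lock on a MyISAM table for its duration, which blocks writers; queued writers in turn block later readers, so the whole table can appear read-locked. The QUICK option skips the full row scan and finishes much faster.

        -- lighter pass: checks the index only, no full row scan
        CHECK TABLE my_busy_table QUICK;

        -- while a full check runs in another session, blocked sessions show a
        -- lock-wait state (e.g. "Locked" / "Waiting for table level lock")
        SHOW FULL PROCESSLIST;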

    Read the article

  • Web Server Setup

    - by gustyaquino
    Hello, In my workplace we want to run our own web server for at least 100 Apache/PHP/MySQL web pages. My boss is opposed to hiring skilled personnel; he thinks we can do it ourselves. Currently we are working with a HostGator reseller account. I chose CentOS as the operating system, but I don't know the best hardware solution. HP, Dell? What about the setup on these platforms? Thanks. PS: sorry for my bad English. Edit: The purpose of this migration isn't related to performance issues, but independence.

    Read the article

  • Can I Store MediaWiki Files on the cloud?

    - by user219048
    I recently got a Chromebook, and I've been brainstorming different ways to put MediaWiki on it (with localhost, not a server). One way I've read about online is to go into developer mode to download and set up LAMP. I was wondering, couldn't I store the Apache, MySQL, PHP, and MediaWiki files in the cloud (Google Drive)? And if so, would anything prevent me from accessing my wiki on any other computer's localhost, assuming I could just log into Google Drive to access these files? Might there be any reduced performance when operating from the cloud?

    Read the article

  • Inconsistent slow DB connect times

    - by mcryan
    I have an app that shows significantly increased load times every 5 minutes. We use New Relic, and I've been able to identify that every slow request is due to our DB connect functions. I've looked through our frontend servers and DB servers and have ensured that there are no cron jobs running every 5 minutes, though the increase comes every 5 minutes without fail. I feel like I've tried everything I can to isolate this, but I'm not having any luck. It is not coming from a cron job, there is no cache expiring every 5 minutes, etc. What else could I check, and does anyone know of anything that could cause this sort of behavior? For what it's worth, our stack includes Apache, PHP, MySQL, Varnish and Memcache. SQL and Memcache are running on dedicated servers and we have a number of frontend servers behind a load balancer, each running Apache and Varnish. Spike every 5 minutes: And it's always DB connect (in red) taking up all of the time:
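    Two server-side things worth ruling out, sketched below (a hedged suggestion, not a diagnosis): a MySQL scheduled event firing on a 5-minute interval, and intermittent reverse-DNS lookups during connection setup.

        -- any scheduled events on the server that run every 5 minutes?
        SELECT EVENT_SCHEMA, EVENT_NAME, INTERVAL_VALUE, INTERVAL_FIELD, STATUS
          FROM information_schema.EVENTS;

        -- if this is OFF, each new connection may do a reverse-DNS lookup,
        -- which can stall whenever the resolver is periodically slow
        SHOW VARIABLES LIKE 'skip_name_resolve';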

    Read the article

  • minimum required bandwidth for remote database server

    - by user66734
    I want to build a small warehousing application for my company. We have a central warehouse which distributes to 8 sales points across the country. They insist on an in-house solution. I am thinking of setting up a central MySQL DB Linux server and having the branches connect to it to store sales. Queries to the DB from the branches will be minimal, maybe 10 per hour. However, I need all the branches to be able to store each sale's data (product ID, customer ID) in the central DB at peak time at most once every five minutes. My question is: can I get away with simple 24 Mbps/768 kbps DSL lines? If not, what is the bandwidth requirement? Can I rely on a load-balancing router to combine additional lines if needed? Can you propose some server hardware specs?
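    A rough back-of-envelope estimate, assuming each sale is a single small row: a product ID, customer ID and timestamp is well under 1 KB on the wire, so one insert per branch every five minutes from 8 branches is on the order of 8 KB per five minutes, i.e. a few hundred bits per second of sustained traffic. Even with TCP/TLS overhead and the occasional query, a 768 kbps upstream is orders of magnitude more than this workload needs; line reliability and latency matter far more than bandwidth here.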

    Read the article

  • InnoDB table locks without apparent reason

    - by Skreo
    Hi, I have an InnoDB table used for visitor counting, which has worked perfectly for several years, but it failed twice yesterday, maybe because of an increase in visitors. Without apparent reason, this table locked up, with hundreds of DELETE and REPLACE INTO queries (500+) in the "updating" or "cleaning up" state. (I no longer have a copy of the processlist...) This table contains few entries, between 500 and 1500, so the updating queries are usually very fast and don't lock. I don't know where to look to find the cause of this problem and resolve it definitively, but I guess this could give you a better view of the problem:

        mysql> show global status like "%innodb_row_lock%";
        +-------------------------------+-----------+
        | Variable_name                 | Value     |
        +-------------------------------+-----------+
        | Innodb_row_lock_current_waits | 0         |
        | Innodb_row_lock_time          | 132004175 |
        | Innodb_row_lock_time_avg      | 10521     |
        | Innodb_row_lock_time_max      | 59373     |
        | Innodb_row_lock_waits         | 12546     |
        +-------------------------------+-----------+
        5 rows in set (0.00 sec)

    Sorry for my poor English, and thanks for your help ;-)
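    A hedged sketch for catching the blocker the next time this happens (assumes MySQL 5.5+ or the InnoDB plugin, where the InnoDB information_schema tables are available):

        -- snapshot of current lock waits and the statement holding each lock
        SELECT w.requesting_trx_id, w.blocking_trx_id, b.trx_query AS blocking_query
          FROM information_schema.INNODB_LOCK_WAITS w
          JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;

        -- the TRANSACTIONS section here also lists lock waits and long-running transactions
        SHOW ENGINE INNODB STATUS;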

    Read the article

  • IIS 7.5 Site being redirected from hostname to IP

    - by TuxOtaku
    So here's the problem. I have a site in IIS that is being redirected from the site's hostname to its IP address. The problem is, I haven't even set up redirects at all for the site; and yet when I analyze the headers that come through as the page loads, I see clear as day, "302 Temporary Redirect". What could be causing this? I thought perhaps it was something in my application's DB (it's a PHP/MySQL application), but I have ruled that out. I also thought that it might be a rewrite rule somewhere, so I deleted all my rewrite rules as well.

    Read the article

  • Innodb statistics

    - by user64204
    We're running InnoDB as our MySQL engine and using phpMyAdmin to administer our database. Under Status - Query statistics, phpMyAdmin gives us the following: We would like to know where these figures come from, because we would like to create a Munin graph showing the evolution of these statistics over time. When we run the SHOW STATUS; query, here is what we get:

        Innodb_rows_deleted     247555
        Innodb_rows_inserted    822911
        Innodb_rows_read        694934413
        Innodb_rows_updated     15048

    As you can see there is a substantial difference, although both were taken at almost the same time. Q: Do you know where phpMyAdmin gets its values from?
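    For the Munin goal, a minimal sketch (independent of where phpMyAdmin reads its numbers): poll the cumulative counters and graph the per-interval delta, since they count up from server start.

        -- cumulative row counters suitable for a Munin plugin to sample
        SHOW GLOBAL STATUS LIKE 'Innodb_rows_%';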

    Read the article

  • installing php from source

    - by samsunggalaxyss
    Hi, I have a CentOS 5.5 server on which I want to host Magento. However, I don't want to use a control panel but to do everything myself. The base repo on CentOS provides PHP 5.1.6, which is too old for the application to use. If I download and compile the newest PHP from source, will I also have to install the newest Apache and MySQL? I want all the pieces to work well together. I am new at Linux, but I know enough to install these things from the latest source. But say I install the new Apache 2.2.17, which has its document root at /usr/local/apache2/htdocs/ - how do I tell PHP where that is, if you understand what I mean? Thanks for your help.

    Read the article

  • How to set up server to be accessed from mobile devices [closed]

    - by Kgrover
    I need to set up a server that will eventually be accessed from an Android application. I don't have experience with how to go about this, but I've seen MySQL servers (Amazon EC2). I would need to store data on the server remotely (again, from a mobile device) and fetch it to display. What would be the best kind of server to use? I'm guessing I would only need around 50 GB of space. Is it possible to use a network drive and set that up as a remote server with an IP address? I would need to upload and extract data through java on Android. This is my first question on ServerFault, and I'm not sure if it's the appropriate forum. If not, please redirect me.

    Read the article

  • Transient mysqlcheck errors about "size of datafile" (file too small)

    - by Adam Backstrom
    Running mysqlcheck on a live database is giving me transient errors like this one: mydatabase.mytable error : Size of datafile is: 500719688 Should be: 501000484 error : Corrupt When I run the command again or check the table one-off using mysql, it's listed as OK. Is this just a side effect of running checks on live tables? Is it possible that data is not flushed, hence the strange discrepancy? We moved several databases this morning by shutting down mysqld on the source and rsyncing files across to the new server, but these are all MyISAM tables so I don't believe the two things are related. (But I mention it just in case.)
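    A hedged workaround sketch, assuming MyISAM tables checked in place: the size mismatch can simply be unflushed in-memory state on a live table, so flushing before the check (or simply re-checking) usually clears it.

        -- flush cached state for the table, then check it again
        USE mydatabase;
        FLUSH TABLES mytable;
        CHECK TABLE mytable MEDIUM;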

    Read the article

  • MVC Persist Collection ViewModel (Update, Delete, Insert)

    - by Riccardo Bassilichi
    In order to create a more elegant solution, I'm curious to hear your suggestions about how to persist a collection. I have a collection stored in the DB. This collection goes to a web page in a view model. When it comes back from the web page to the controller, I need to persist the modified collection to the same DB. The simple solution is to delete the stored collection and recreate all rows. I need a more elegant solution that merges the collections: delete records that are no longer present, update matching records and insert new rows. These are my models and view models:

        public class CustomerModel
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<PreferredAirportModel> PreferedAirports { get; set; }
        }

        public class AirportModel
        {
            public virtual string Id { get; set; }
            public virtual string AirportName { get; set; }
        }

        public class PreferredAirportModel
        {
            public virtual AirportModel Airport { get; set; }
            public virtual int CheckInMinutes { get; set; }
        }

        // ViewModels
        public class CustomerViewModel
        {
            [Required]
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<PreferredAirporViewtModel> PreferedAirports { get; set; }
        }

        public class PreferredAirporViewtModel
        {
            [Required]
            public virtual string AirportId { get; set; }
            [Required]
            public virtual int CheckInMinutes { get; set; }
        }

    And this is the controller with the not-so-elegant solution:

        public class CustomerController
        {
            public ActionResult Save(string id, CustomerViewModel viewModel)
            {
                var session = SessionFactory.CurrentSession;
                var customer = session.Query<CustomerModel>().SingleOrDefault(el => el.Id == id);
                customer.Name = viewModel.Name;

                // How can I merge the collections, handling deletes, updates and inserts?
                var modifiedPreferedAirports = new List<PreferredAirportModel>();
                var modifiedPreferedAirportsVm = new List<PreferredAirporViewtModel>();

                // Update every common airport
                foreach (var airport in viewModel.PreferedAirports)
                {
                    foreach (var custPa in customer.PreferedAirports)
                    {
                        if (custPa.Airport.Id == airport.AirportId)
                        {
                            modifiedPreferedAirports.Add(custPa);
                            modifiedPreferedAirportsVm.Add(airport);
                            custPa.CheckInMinutes = airport.CheckInMinutes;
                        }
                    }
                }

                // Remove common airports from the view model
                modifiedPreferedAirportsVm.ForEach(el => viewModel.PreferedAirports.Remove(el));

                // Remove deleted airports from the model
                var toDelete = customer.PreferedAirports.Except(modifiedPreferedAirports);
                toDelete.ForEach(el => customer.PreferedAirports.Remove(el));

                // Add new airports
                var toAdd = viewModel.PreferedAirports.Select(el => new PreferredAirportModel
                {
                    Airport = session.Query<AirportModel>()
                        .SingleOrDefault(a => a.Id == el.AirportId),
                    CheckInMinutes = el.CheckInMinutes
                });
                toAdd.ForEach(el => customer.PreferedAirports.Add(el));

                session.Save(customer);
                return View();
            }
        }

    My environment is ASP.NET MVC 4, NHibernate, AutoMapper, SQL Server. Thank you!!
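    For what it's worth, the merge semantics being asked for (update matches, insert new rows, delete missing ones) can be expressed set-based in T-SQL. A hedged sketch below, with hypothetical table, column and variable names, just to illustrate the shape the NHibernate code needs to reproduce per row:

        -- @customerId and #IncomingAirports stand in for the posted view model
        DECLARE @customerId varchar(10) = 'C001';
        CREATE TABLE #IncomingAirports (AirportId varchar(10), CheckInMinutes int);

        MERGE PreferredAirport AS t
        USING #IncomingAirports AS s
           ON t.CustomerId = @customerId AND t.AirportId = s.AirportId
        WHEN MATCHED THEN
            UPDATE SET t.CheckInMinutes = s.CheckInMinutes
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CustomerId, AirportId, CheckInMinutes)
            VALUES (@customerId, s.AirportId, s.CheckInMinutes)
        WHEN NOT MATCHED BY SOURCE AND t.CustomerId = @customerId THEN
            DELETE;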

    Read the article

  • Parse filename, insert to SQL

    - by jakesankey
    Thanks to Code Poet, I am now working off of this code to parse all .txt files in a directory and store them in a database. I need a bit more help though... The file names are R303717COMP_148A2075_20100520.txt (the middle section is unique per file). I would like to add something to the code so that it can parse out the R303717COMP and put that in the left column of the database, such as (this is not the only R number we have):

        R303717COMP data data data
        R303717COMP data data data
        R303717COMP data data data
        etc

    Lastly, I would like to have it store each full file name in another table that gets checked, so that a file doesn't get processed twice. Any help is appreciated.

        using System;
        using System.Data;
        using System.Data.SQLite;
        using System.IO;

        namespace CSVImport
        {
            internal class Program
            {
                private static void Main(string[] args)
                {
                    using (SQLiteConnection con = new SQLiteConnection("data source=data.db3"))
                    {
                        if (!File.Exists("data.db3"))
                        {
                            con.Open();
                            using (SQLiteCommand cmd = con.CreateCommand())
                            {
                                cmd.CommandText = @"
                                    CREATE TABLE [Import] (
                                        [RowId] integer PRIMARY KEY AUTOINCREMENT NOT NULL,
                                        [FeatType] varchar,
                                        [FeatName] varchar,
                                        [Value] varchar,
                                        [Actual] decimal,
                                        [Nominal] decimal,
                                        [Dev] decimal,
                                        [TolMin] decimal,
                                        [TolPlus] decimal,
                                        [OutOfTol] decimal,
                                        [Comment] nvarchar);";
                                cmd.ExecuteNonQuery();
                            }
                            con.Close();
                        }
                        con.Open();
                        using (SQLiteCommand insertCommand = con.CreateCommand())
                        {
                            insertCommand.CommandText = @"
                                INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, Comment)
                                VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @Comment);";
                            insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String));

                            string[] files = Directory.GetFiles(Environment.CurrentDirectory, "TextFile*.*");
                            foreach (string file in files)
                            {
                                string[] lines = File.ReadAllLines(file);
                                bool parse = false;
                                foreach (string tmpLine in lines)
                                {
                                    string line = tmpLine.Trim();
                                    if (!parse && line.StartsWith("Feat. Type,"))
                                    {
                                        parse = true;
                                        continue;
                                    }
                                    if (!parse || string.IsNullOrEmpty(line))
                                    {
                                        continue;
                                    }
                                    foreach (SQLiteParameter parameter in insertCommand.Parameters)
                                    {
                                        parameter.Value = null;
                                    }
                                    string[] values = line.Split(new[] {','});
                                    for (int i = 0; i < values.Length - 1; i++)
                                    {
                                        SQLiteParameter param = insertCommand.Parameters[i];
                                        if (param.DbType == DbType.Decimal)
                                        {
                                            decimal value;
                                            param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                        }
                                        else
                                        {
                                            param.Value = values[i];
                                        }
                                    }
                                    insertCommand.ExecuteNonQuery();
                                }
                            }
                        }
                        con.Close();
                    }
                }
            }
        }
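    A hedged sketch of the bookkeeping the question asks for, kept on the SQL side (table and column names are made up for illustration): an extra column for the R-number parsed from the file name, and a table of already-imported files to check before processing.

        -- column to hold the leading R...COMP token from the file name
        ALTER TABLE Import ADD COLUMN RNumber varchar;

        -- one row per imported file; check it before parsing a file again
        CREATE TABLE IF NOT EXISTS ImportedFile (
            FileName   varchar PRIMARY KEY,   -- e.g. R303717COMP_148A2075_20100520.txt
            ImportedAt datetime DEFAULT CURRENT_TIMESTAMP
        );

    In the C# loop, the R-number itself would just be Path.GetFileName(file).Split('_')[0].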

    Read the article

  • insert xml into sql server

    - by JonDog
    OK, so I know there are a bunch of other posts on importing XML into SQL Server, but I just can't seem to figure it out. I think my problem may have to do with having multiple levels. Can anyone help please? My XML, SQL table, and the code I've tried are below. Thanks in advance.

        <items>
          <item>
            <sku>
              <value> the_sku </value>
            </sku>
            <short_dis2>
              <value> short_discription2 </value>
            </short_dis2>
            <short_dis1>
              <value> short_discription1 </value>
            </short_dis1>
            <title>
              <value> product_title </value>
            </title>
            <detailSpec>
              <value> detailed_specification_html </value>
            </detailSpec>
            <basicSpec>
              <value> basic_overview_html </value>
            </basicSpec>
            <basicSpecHeading>
              <value> besic_spec_heading </value>
            </basicSpecHeading>
            <detailSpecHeading>
              <value> detailed_specifications_heading </value>
            </detailSpecHeading>
            <model>
              <value> the_model_number </value>
            </model>
            <image_file_name>
              <value> the_image_url </value>
            </image_file_name>
          </item>
          ...

        CREATE TABLE [dbo].[products](
            [sku] [nchar](15) NOT NULL,
            [model] [nvarchar](50) NULL,
            [title] [ntext] NULL,
            [short_dis1] [ntext] NULL,
            [short_dis2] [ntext] NULL,
            [basicSpecHeading] [ntext] NULL,
            [basicSpec] [ntext] NULL,
            [detailSpecHeading] [ntext] NULL,
            [detailSpec] [ntext] NULL,
            [image_file_name] [nchar](100) NULL

        INSERT INTO products (sku, short_dis2, short_dis1, title, detailSpec, basicSpec, basicSpecHeading, detailSpecHeading, model, image_file_name)
        SELECT
            X.product.query('sku').value('.', 'nchar(15)'),
            X.product.query('short_dis2').value('.', 'nvarchar(max)'),
            X.product.query('short_dis1').value('.', 'nvarchar(max)'),
            X.product.query('title').value('.', 'nvarchar(max)'),
            X.product.query('detailSpec').value('.', 'nvarchar(max)'),
            X.product.query('basicSpec').value('.', 'nvarchar(max)'),
            X.product.query('basicSpecHeading').value('.', 'nvarchar(max)'),
            X.product.query('detailSpecHeading').value('.', 'nvarchar(max)'),
            X.product.query('model').value('.', 'nvarchar(max)'),
            X.product.query('image_file_name').value('.', 'nvarchar(max)')
        FROM (
            SELECT CAST(x AS XML)
            FROM OPENROWSET(
                BULK 'C:\users\me\desktop\xml_sample.xml',
                SINGLE_BLOB) AS T(x)
        ) AS T(x)
        CROSS APPLY x.nodes('Products/Product') AS X(product);
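    One thing that stands out (offered as a hedged observation, not a tested fix): the sample XML is rooted at <items>/<item>, while the CROSS APPLY shreds 'Products/Product', so no rows are produced. A sketch of the matching path, reading each field through its <value> child:

        CROSS APPLY x.nodes('/items/item') AS X(product);
        -- and, per column, for example:
        X.product.value('(sku/value/text())[1]', 'nchar(15)')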

    Read the article

  • Best full text search for mysql?

    - by ConroyP
    We're currently running MySQL on a LAMP stack and have been looking at implementing a more thorough full-text search on our site. We've looked at MySQL's own full-text search, but it doesn't seem to cope well with large databases, which makes it far too slow for our needs. Our main requirements are:

        speed returning results
        simple updating of the index

    In addition to the above, our "nice to have"s are:

        ideally not something that requires adding a module to MySQL
        plays nicely with PHP (the majority of our dev work is done using PHP)

    There seem to be quite a few healthy open-source projects for adding fast, reliable full-text search to MySQL, so I'm basically looking for recommendations/suggestions on what you've found to be the most useful product out there, easiest to set up, etc. So far, the list of ones we've started to play around with is:

        Sphinx, C++ based, used by craigslist, thepiratebay
        Lucene, Java-based Apache project, powers zeoh.com and zoomf.com
        Solr, Java-based offshoot of Lucene, used to power searches on Digg, CNet & AOL Channels

    Are there any better ones out there that we haven't come across yet? Can you recommend / suggest against any of the options we've gathered so far? Thanks for your help! Update: @Cletus suggested Google's Custom Search Engine. We recently trialled this on a couple of projects, and it's an almost-perfect fit for our needs. The problem is that entries on our site are updated quite regularly, and unfortunately the speed at which entries go in/get updated in Google's index was just too slow and erratic for us to rely on, even with the addition of sitemaps and requested crawl-rate changes.
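    For reference, a minimal sketch of the built-in MySQL option that was found too slow (MyISAM-era FULLTEXT; table and column names are hypothetical); the external engines listed above keep their own index outside MySQL, which is what sidesteps this bottleneck.

        ALTER TABLE articles ADD FULLTEXT INDEX ft_articles (title, body);

        SELECT id, title
          FROM articles
         WHERE MATCH(title, body) AGAINST ('mysql insert');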

    Read the article

  • java connectivity with mysql error

    - by abson
    I just started with the connectivity and tried this example. I have installed the necessary software and also copied the JAR file into the /ext folder. Yet the code below gives the following error.

        import java.sql.*;

        public class Jdbc00 {
            public static void main(String args[]) {
                try {
                    Statement stmt;
                    Class.forName("com.mysql.jdbc.Driver");
                    String url = "jdbc:mysql://localhost:3306/mysql";
                    Connection con = DriverManager.getConnection(url, "root", "root");

                    // Display URL and connection information
                    System.out.println("URL: " + url);
                    System.out.println("Connection: " + con);

                    // Get a Statement object
                    stmt = con.createStatement();

                    // Create the new database
                    stmt.executeUpdate("CREATE DATABASE JunkDB");
                    stmt.executeUpdate("GRANT SELECT,INSERT,UPDATE,DELETE," +
                                       "CREATE,DROP " +
                                       "ON JunkDB.* TO 'auser'@'localhost' " +
                                       "IDENTIFIED BY 'drowssap';");
                    con.close();
                } catch (Exception e) {
                    e.printStackTrace();
                } // end catch
            } // end main
        } // end class Jdbc00

    But it gave the following error:

        D:\Java12\Explore>java Jdbc00
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
                at java.net.URLClassLoader$1.run(Unknown Source)
                at java.security.AccessController.doPrivileged(Native Method)
                at java.net.URLClassLoader.findClass(Unknown Source)
                at java.lang.ClassLoader.loadClass(Unknown Source)
                at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
                at java.lang.ClassLoader.loadClass(Unknown Source)
                at java.lang.Class.forName0(Native Method)
                at java.lang.Class.forName(Unknown Source)
                at Jdbc11.main(Jdbc00.java:11)

    Could anyone please guide me in correcting this?

    Read the article

  • Fluent NHibernate/SQL Server 2008 insert query problem

    - by Mark
    Hi all, I'm new to Fluent NHibernate and I'm running into a problem. I have a mapping defined as follows:

        public PersonMapping()
        {
            Id(p => p.Id).GeneratedBy.HiLo("1000");
            Map(p => p.FirstName).Not.Nullable().Length(50);
            Map(p => p.MiddleInitial).Nullable().Length(1);
            Map(p => p.LastName).Not.Nullable().Length(50);
            Map(p => p.Suffix).Nullable().Length(3);
            Map(p => p.SSN).Nullable().Length(11);
            Map(p => p.BirthDate).Nullable();
            Map(p => p.CellPhone).Nullable().Length(12);
            Map(p => p.HomePhone).Nullable().Length(12);
            Map(p => p.WorkPhone).Nullable().Length(12);
            Map(p => p.OtherPhone).Nullable().Length(12);
            Map(p => p.EmailAddress).Nullable().Length(50);
            Map(p => p.DriversLicenseNumber).Nullable().Length(50);
            Component<Address>(p => p.CurrentAddress, m =>
            {
                m.Map(p => p.Line1, "Line1").Length(50);
                m.Map(p => p.Line2, "Line2").Length(50);
                m.Map(p => p.City, "City").Length(50);
                m.Map(p => p.State, "State").Length(50);
                m.Map(p => p.Zip, "Zip").Length(2);
            });
            Map(p => p.EyeColor).Nullable().Length(3);
            Map(p => p.HairColor).Nullable().Length(3);
            Map(p => p.Gender).Nullable().Length(1);
            Map(p => p.Height).Nullable();
            Map(p => p.Weight).Nullable();
            Map(p => p.Race).Nullable().Length(1);
            Map(p => p.SkinTone).Nullable().Length(3);
            HasMany(p => p.PriorAddresses).Cascade.All();
        }

        public PreviousAddressMapping()
        {
            Table("PriorAddress");
            Id(p => p.Id).GeneratedBy.HiLo("1000");
            Map(p => p.EndEffectiveDate).Not.Nullable();
            Component<Address>(p => p.Address, m =>
            {
                m.Map(p => p.Line1, "Line1").Length(50);
                m.Map(p => p.Line2, "Line2").Length(50);
                m.Map(p => p.City, "City").Length(50);
                m.Map(p => p.State, "State").Length(50);
                m.Map(p => p.Zip, "Zip").Length(2);
            });
        }

    My test is:

        [Test]
        public void can_correctly_map_Person_with_Addresses()
        {
            var myPerson = new Person("Jane", "", "Doe");
            var priorAddresses = new[]
            {
                new PreviousAddress(ObjectMother.GetAddress1(), DateTime.Parse("05/13/2010")),
                new PreviousAddress(ObjectMother.GetAddress2(), DateTime.Parse("05/20/2010"))
            };

            new PersistenceSpecification<Person>(Session)
                .CheckProperty(c => c.FirstName, myPerson.FirstName)
                .CheckProperty(c => c.LastName, myPerson.LastName)
                .CheckProperty(c => c.MiddleInitial, myPerson.MiddleInitial)
                .CheckList(c => c.PriorAddresses, priorAddresses)
                .VerifyTheMappings();
        }

    GetAddress1() (yeah, horrible name) has Line2 == null. The tables seem to be created correctly in SQL Server 2008, but the test fails with a SQLException "String or binary data would be truncated." When I grab the SQL statement in SQL Profiler, I get:

        exec sp_executesql N'INSERT INTO PriorAddress (Line1, Line2, City, State, Zip, EndEffectiveDate, Id)
        VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6)',
        N'@p0 nvarchar(18),@p1 nvarchar(4000),@p2 nvarchar(10),@p3 nvarchar(2),@p4 nvarchar(5),@p5 datetime,@p6 int',
        @p0=N'6789 Somewhere Rd.',@p1=NULL,@p2=N'Hot Coffee',@p3=N'MS',@p4=N'09876',@p5='2010-05-13 00:00:00',@p6=1001

    Notice the @p1 parameter is being set to nvarchar(4000) and being passed a NULL value. Why is it setting the parameter to nvarchar(4000)? How can I fix it? Thanks!

    Read the article

  • BASH echo write mysql input

    - by jmituzas
    I have a bash menu where variables are written to a file for mysql input. Here's what I have:

        echo "CREATE DATABASE '$mysqldbn';
        #GRANT ALL PRIVILEGES ON *.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup' WITH GRANT OPTION;
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myip' IDENTIFIED BY '$mysqlup';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'localhost' IDENTIFIED BY '$mysqlup';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES on '$mysqldbn'.* TO '$mysqlu'@'$rip' IDENTIFIED BY '$mysqlup';" > nmysql.db

        mysql -u root -p$mypass < nmysql.db

    The problem is that to get the variables to show up I had to put them in single quotes. The single quotes are what I want for instances like '$mysqlu'@'localhost', but how can I remove the quotes and still get to use the variable in an instance like CREATE DATABASE '$mysqldbn'? Double quotes won't work either; I am at a loss. Thanks in advance, Joe
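    A hedged note on the quoting (the database and account names below are placeholders): in MySQL, single quotes mark string literals such as account names and passwords, while identifiers like a database name take backquotes, so the statements the script should emit look like the sketch below. Inside the double-quoted echo string, the backquotes just need a backslash escape (\`$mysqldbn\`) so the shell does not treat them as command substitution.

        CREATE DATABASE `mydb`;
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER,
              CREATE TEMPORARY TABLES, LOCK TABLES
           ON `mydb`.* TO 'appuser'@'localhost' IDENTIFIED BY 'secret';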

    Read the article

  • php oop and mysql

    - by gloris
    I need to get data, check it, and send it to the DB, programming with PHP OOP. Could you tell me if my class structure is good, and how to display all the data? Thanks.

        <?php
        class Database {
            private $DBhost = 'localhost';
            private $DBuser = 'root';
            private $DBpass = 'root';
            private $DBname = 'blog';

            public function connect() {
                // Connect to mysql db
            }

            public function select($rows) {
                // Select data from db
            }

            public function insert($rows) {
                // Insert data to db
            }

            public function delete($rows) {
                // Delete data from db
            }
        }

        class CheckData {
            public $number1;
            public $number2;

            public function __construct() {
                $this->number1 = $_POST['number1'];
                $this->number2 = $_POST['number2'];
            }

            function ISempty() {
                if (!empty($this->number1)) {
                    echo "Not Empty";
                    $data = new Database();
                    $data->insert($this->number1);
                } else {
                    echo "Empty1";
                }
                if (!empty($this->number2)) {
                    echo "Not Empty";
                    $data = new Database();
                    $data->insert($this->number2);
                } else {
                    echo "Empty2";
                }
            }
        }

        class DisplayData {
            // How to print all data?
            function DisplayNumber() {
                $data = new Database();
                $data->select();
            }
        }

        $check = new CheckData();
        $check->ISempty();

        $display = new DisplayData();
        $display->DisplayNumber();
        ?>

    Read the article

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ deployment read/write setup vs. a multi-AZ deployment MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize involves their reserved instances vs. on-demand instances. Situation 1: purchase a reserved multi-AZ setup with an extra-large high-memory (17 GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17 GB RAM instance all the time, and therefore especially not a hot standby, what savings could a better topology create, like potentially situation 2 below? Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby to receive the writes only, then create and load-balance several read-only slaves off the master and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves. My thinking is, if I have a variable read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master level, allowing me to scale up/down and/or out at the read level according to demand as needed. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice are always welcome, of course. Thanks in advance, R

    Read the article

  • MySQL 5.5: imminent release? Oracle expected to announce the new version of the open-source DBMS on Wednesday

    MySQL 5.5: imminent release? Oracle is expected to announce the new version of the open-source DBMS on Wednesday. Update of 13/12/10. This Wednesday, Oracle is holding a webinar to present "an important update to MySQL". Tomas Ulin, Vice President of MySQL development, and Rob Young, Senior Product Manager, will unveil the latest advances in the open-source DBMS that the database giant picked up with its acquisition of Sun. Oracle had announced a release candidate of MySQL 5.5 at Oracle OpenWorld in September (see above). This time, the project's leaders could announce its official availability.

    Read the article

  • MySQL port 3306 blocked in csf yet can still telnet to port 3306 from external host

    - by Neek
    We have a CentOS 6 VPS that was recently migrated to a new machine within the same web hosting company. It's running WHM/cPanel and has csf/lfd installed. csf is set up with a mostly vanilla config. I'm no iptables expert; csf has not let me down before. If a port isn't in the TCP_IN list, it should be blocked on the firewall by iptables. My problem is that I can telnet to port 3306 from an external host, yet I think iptables ought to be blocking 3306 because of csf's rules. We are now failing a security check because of this open port. (This output is obfuscated to protect the innocent: www.ourhost.com is the host with the firewall problem.)

        [root@nickfenwick log]# telnet www.ourhost.com 3306
        Trying 158.255.45.107...
        Connected to www.ourhost.com.
        Escape character is '^]'.
        HHost 'nickfenwick.com' is not allowed to connect to this MySQL serverConnection closed by foreign host.

    So the connection is established, and MySQL refuses the connection due to its configuration. I need the network connection to be refused at the firewall level, before it reaches MySQL. Using WHM's csf web UI I can see 'Firewall Configuration' includes a fairly sensible TCP_IN line:

        TCP_IN: 20,21,22,25,53,80,110,143,222,443,465,587,993,995,2077,2078,2082,2083,2086,2087,2095,2096,8080

    (Let's ignore that I could trim that a little for now; my concern is that 3306 is not listed in that list.) When csf is restarted it logs the usual slew of output as it sets up iptables rules, for example what looks like it blocking all traffic and then allowing specific ports like SSH on 22:

        [cut] DROP all opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0
        [cut] ACCEPT tcp opt -- in !lo out * 0.0.0.0/0 -> 0.0.0.0/0 state NEW tcp dpt:22
        [cut]

    I can see that iptables is running; service iptables status returns a long list of firewall rules. Here is my Chain INPUT section from service iptables status, hopefully that's enough to show how the firewall is configured.
        Table: filter
        Chain INPUT (policy DROP)
        num  target      prot opt source           destination
        1    acctboth    all  --  0.0.0.0/0        0.0.0.0/0
        2    ACCEPT      tcp  --  217.112.88.10    0.0.0.0/0        tcp dpt:53
        3    ACCEPT      udp  --  217.112.88.10    0.0.0.0/0        udp dpt:53
        4    ACCEPT      tcp  --  217.112.88.10    0.0.0.0/0        tcp spt:53
        5    ACCEPT      udp  --  217.112.88.10    0.0.0.0/0        udp spt:53
        6    ACCEPT      tcp  --  8.8.4.4          0.0.0.0/0        tcp dpt:53
        7    ACCEPT      udp  --  8.8.4.4          0.0.0.0/0        udp dpt:53
        8    ACCEPT      tcp  --  8.8.4.4          0.0.0.0/0        tcp spt:53
        9    ACCEPT      udp  --  8.8.4.4          0.0.0.0/0        udp spt:53
        10   ACCEPT      tcp  --  8.8.8.8          0.0.0.0/0        tcp dpt:53
        11   ACCEPT      udp  --  8.8.8.8          0.0.0.0/0        udp dpt:53
        12   ACCEPT      tcp  --  8.8.8.8          0.0.0.0/0        tcp spt:53
        13   ACCEPT      udp  --  8.8.8.8          0.0.0.0/0        udp spt:53
        14   LOCALINPUT  all  --  0.0.0.0/0        0.0.0.0/0
        15   ACCEPT      all  --  0.0.0.0/0        0.0.0.0/0
        16   INVALID     tcp  --  0.0.0.0/0        0.0.0.0/0
        17   ACCEPT      all  --  0.0.0.0/0        0.0.0.0/0        state RELATED,ESTABLISHED
        18   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:20
        19   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:21
        20   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:22
        21   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:25
        22   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:53
        23   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:80
        24   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:110
        25   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:143
        26   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:222
        27   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:443
        28   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:465
        29   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:587
        30   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:993
        31   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:995
        32   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2077
        33   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2078
        34   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2082
        35   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2083
        36   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2086
        37   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2087
        38   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2095
        39   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:2096
        40   ACCEPT      tcp  --  0.0.0.0/0        0.0.0.0/0        state NEW tcp dpt:8080
        41   ACCEPT      udp  --  0.0.0.0/0        0.0.0.0/0        state NEW udp dpt:20
        42   ACCEPT      udp  --  0.0.0.0/0        0.0.0.0/0        state NEW udp dpt:21
        43   ACCEPT      udp  --  0.0.0.0/0        0.0.0.0/0        state NEW udp dpt:53
        44   ACCEPT      udp  --  0.0.0.0/0        0.0.0.0/0        state NEW udp dpt:222
        45   ACCEPT      udp  --  0.0.0.0/0        0.0.0.0/0        state NEW udp dpt:8080
        46   ACCEPT      icmp --  0.0.0.0/0        0.0.0.0/0        icmp type 8
        47   ACCEPT      icmp --  0.0.0.0/0        0.0.0.0/0        icmp type 0
        48   ACCEPT      icmp --  0.0.0.0/0        0.0.0.0/0        icmp type 11
        49   ACCEPT      icmp --  0.0.0.0/0        0.0.0.0/0        icmp type 3
        50   LOGDROPIN   all  --  0.0.0.0/0        0.0.0.0/0

    What's the next thing to check?

    Read the article

  • MySQL leans further toward InnoDB, which will be used for the system tables to benefit from ACID properties

    MySQL leans further toward InnoDB, which will be used for the system tables to benefit from ACID properties. Since Oracle's acquisition of MySQL, the well-known DBMS has been undergoing a slow transition of its database engine, moving from the original MyISAM engine, based on the ISAM (Indexed Sequential Access Method) approach, to InnoDB. This transition was partially completed when InnoDB became the default engine for MySQL 5.5 and later. Partially,...

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.

    A. Functionality Added or Changed:

        MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low space condition. mysqlbackup now monitors disk space when running backup commands, and users can now specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.

        MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.

    B. Bugs Fixed:

        When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973)

        MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499)

        apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)

        apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with their KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.

        If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)

        After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)

        The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)

        When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)

        A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)

    Please post your questions / comments about Backup in the forums. Thanks, MEB Team

    Read the article

  • MySQL 5.6: the preview release improves availability, performance and administration for Web, Cloud and embedded applications

    MySQL 5.6: the preview release improves availability, performance and administration for Web, Cloud and embedded applications. Update of 12/04/2012, by Hinault Romaric. A new development milestone release (DMR) of MySQL 5.6 has just been published by Oracle. High availability is strengthened in this release with new replication features based on self-healing mechanisms, such as global transaction identifiers (GTIDs) for tracking replication progress across a replication topology, and new MySQL replication utilities for...

    Read the article
