Search Results

Search found 22358 results on 895 pages for 'django raw query'.


  • mysql does not utilize my cpu and ram enough?

    - by vick
    Hello everyone! I am importing a 2.5 GB CSV file into a MySQL table. My storage engine is InnoDB. Here is the script: use xxx; DROP TABLE IF EXISTS `xxx`.`xxx`; CREATE TABLE `xxx`.`xxx` ( `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(128) NOT NULL, `yy` varchar(128) NOT NULL, `yyy` varchar(64) NOT NULL, `yyyy` varchar(2) NOT NULL, `yyyyy` varchar(10) NOT NULL, `url` varchar(64) NOT NULL, `p` varchar(10) NOT NULL, `pp` varchar(10) NOT NULL, `category` varchar(256) NOT NULL, `flag` varchar(4) NOT NULL, PRIMARY KEY (`xxx_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; set autocommit = 0; load data local infile '/home/xxx/raw.csv' into table company fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n' ( name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag ); commit; Why does my PC (Core i7 920 with 6 GB RAM) only consume 9% CPU power and 60% RAM when running these queries?
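
    A bulk LOAD DATA import is normally disk-bound rather than CPU-bound, so low CPU usage is expected here; the bottleneck is almost certainly I/O and log flushing, not the processor. As a hedged sketch (assuming a one-off load where relaxed durability is acceptable), these session settings are commonly used to speed up a large InnoDB import:

        SET autocommit = 0;
        SET unique_checks = 0;        -- skip unique index checks during the load
        SET foreign_key_checks = 0;   -- skip foreign key validation during the load
        LOAD DATA LOCAL INFILE '/home/xxx/raw.csv' INTO TABLE company
          FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
          LINES TERMINATED BY '\r\n'
          (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);
        COMMIT;
        SET unique_checks = 1;
        SET foreign_key_checks = 1;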

    Read the article

  • Manipulating both unicode and ASCII character set in C#

    - by Murlex
    I have this mapping in my C# application: string[,] unicode2Ascii = { { "&#3001;", "\x86" } }; &#3001; is the character reference for the Tamil literal "ஹ", and \x86 is the raw hex literal for the byte sequence MS Word saved. I am trying to map these Unicode value "strings" to hex values under 255 (so as to accommodate systems without Unicode support). I am trying to use string.Replace like this: S = S.Replace(unicode2Ascii[0,0], unicode2Ascii[0,1]); However, the resulting output has a ? instead of the actual hex 0x86 stored. Any pointers on how I could set the encoding for the second element of that array to something like windows-1252? Or is there a better way to do this conversion? Thanks in advance.
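
    The ? usually appears when the replaced string is later written out with an encoding that cannot represent 0x86. A minimal sketch that keeps the target values as raw bytes instead of C# string literals, so no second encoding step can mangle them (variable names are illustrative, not from the original post):

        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        var unicodeToByte = new Dictionary<char, byte> { { '\u0BB9', 0x86 } };  // U+0BB9 is the Tamil letter HA
        var cp1252 = Encoding.GetEncoding(1252);

        using (var output = new MemoryStream())
        {
            foreach (char c in S)
            {
                byte b;
                if (unicodeToByte.TryGetValue(c, out b))
                    output.WriteByte(b);                   // mapped characters become their single byte
                else
                {
                    byte[] rest = cp1252.GetBytes(c.ToString());
                    output.Write(rest, 0, rest.Length);    // everything else goes through windows-1252
                }
            }
            byte[] result = output.ToArray();              // raw bytes, safe for non-Unicode consumers
        }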

    Read the article

  • What will be the setup process for website development?

    - by Vijay Shanker
    Hi, I want to create a simple site for my personal use, using only Python-based technologies, so I want to get an expert opinion on this topic. What should I use as a platform? I did a search for the available options and found Django, Grok, web2py and many more. Which one should a novice user use? If I choose to use only basic Python scripts, what options do I have to work with? http://wiki.python.org/moin/WebBrowserProgramming. This link on the Python site confused me more, instead of satisfying my curiosity about the topic. Please give some pointers to accurate and easy-to-understand reading material. I have an idea of how to develop Java-based web applications using either Spring MVC or Struts. Can I relate the Java process to the Python process for web development?

    Read the article

  • How can I exclude LEFT JOINed tables from TOP in SQL Server?

    - by Kalessin
    Let's say I have two tables of books and two tables of their corresponding editions. I have a query as follows: SELECT TOP 10 * FROM (SELECT hbID, hbTitle, hbPublisherID, hbPublishDate, hbedID, hbedDate FROM hardback LEFT JOIN hardbackEdition on hbID = hbedID UNION SELECT pbID, pbTitle, pbPublisher, pbPublishDate, pbedID, pbedDate FROM paperback LEFT JOIN paperbackEdition on pbID = pbedID ) books WHERE hbPublisherID = 7 ORDER BY hbPublishDate DESC If there are 5 editions of the first two hardback and/or paperback books, this query only returns two books. However, I want the TOP 10 to apply only to the number of actual book records returned. Is there a way I can select 10 actual books, and still get all of their associated edition records? In case it's relevant, I do not have database permissions to CREATE and DROP temporary tables. Thanks for reading! Update: To clarify, the paperback table has an associated table of paperback editions, and the hardback table has an associated table of hardback editions. The hardback and paperback tables are not related to each other except to the user who will (hopefully!) see them displayed together.
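
    One hedged approach (column names as in the question; this sketch covers the hardback side only) is to apply TOP inside a derived table of books, then LEFT JOIN the editions onto those ten rows, so editions no longer count against the limit:

        SELECT b.*, e.hbedID, e.hbedDate
        FROM (
            SELECT TOP 10 hbID, hbTitle, hbPublisherID, hbPublishDate
            FROM hardback
            WHERE hbPublisherID = 7
            ORDER BY hbPublishDate DESC
        ) AS b
        LEFT JOIN hardbackEdition e ON b.hbID = e.hbedID
        ORDER BY b.hbPublishDate DESC;

    If hardbacks and paperbacks must share the limit, select the TOP 10 from a UNION of the two book tables first, then join each edition table on afterwards.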

    Read the article

  • Use of where in multiple joins to remove rows - linq

    - by bergin
    Hi, I have a table of orders. The status is on the soil orders, which are joined to the orders. I only want to return orders where the joined soil order does not have the status "Removed". I had thought that join sso in db.SoilSamplingOrders on ord.order_id equals sso.order_id where sso.status.Equals("Removed") != true would do it, but then no records are returned! Thanks for any help (query below): var query = from ord in db.Orders join sso in db.SoilSamplingOrders on ord.order_id equals sso.order_id where sso.status.Equals("Removed") != true join cust in db.Customers on ord.customer_id equals cust.customer_id select new Listing { assigned_to = sso.assigned_to, company = cust.company, order_id = ord.order_id, order_created = ord.order_created, customer_id = ord.customer_id, order_created_by_employ_id = ord.order_created_by_employ_id, first_farm_on_order = (from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id select new ListingSubJob { first_farm_on_order = f.farm }).AsEnumerable().First().first_farm_on_order, total_fields = (from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id select new { f.sssj_id }).AsEnumerable().Count(), total_area = (float?) (from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id && f.area_ha != null select f.area_ha ).Sum() ?? 0, total_area_ph_density = (float?)(from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id && f.ph != null select f.ph).Sum() ?? 0, };
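
    A hedged guess at the cause: if status can be NULL in the database, SQL treats NULL as neither equal nor unequal to "Removed", so those rows disappear from the result entirely. Testing for null explicitly keeps them (an assumption about the data, not confirmed by the original post):

        // keep orders whose soil order has no status at all, or any status other than "Removed"
        where sso.status == null || sso.status != "Removed"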

    Read the article

  • Problem with inheritance and List<>

    - by Jagd
    I have an abstract class called Grouping. I have a subclass called GroupingNNA. public class GroupingNNA : Grouping { // blah blah blah } I have a List that contains items of type GroupingNNA, but is actually declared to contain items of type Grouping. List<Grouping> lstGroupings = new List<Grouping>(); lstGroupings.Add( new GroupingNNA { fName = "Joe" }); lstGroupings.Add( new GroupingNNA { fName = "Jane" }); The Problem: The following LINQ query blows up on me because lstGroupings is declared as List<Grouping> and fName is a property of GroupingNNA, not Grouping. var results = from r in lstGroupings where r.fName == "Jane" select r; Oh, and this is a compiler error, not a runtime error. Thanks in advance for any help on this one! More Info: Here is the actual method that won't compile. The OfType() fixed the LINQ query, but the compiler doesn't like the fact that I'm trying to return the filtered sequence as a List<Grouping>. private List<Grouping> ApplyFilterSens(List<Grouping> lstGroupings, string fSens) { // This works now! Thanks @Lasse var filtered = from r in lstGroupings.OfType<GroupingNNA>() where r.QASensitivity == fSens select r; if (filtered != null) { // Compiler doesn't like this now return filtered.ToList<Grouping>(); } else return new List<Grouping>(); }
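
    A minimal sketch of the missing step: after OfType<GroupingNNA>() the sequence is typed as the subclass, so it has to be cast back up before ToList() can produce a List<Grouping> (and the null check can go; a LINQ query expression never returns null):

        // Cast<T> upcasts each element; ToList() then builds the List<Grouping>
        return filtered.Cast<Grouping>().ToList();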

    Read the article

  • Building a document server

    - by Ben
    Hey, I'm just looking for some rough guidance here on the feasibility of a project. I have, say, a few thousand text documents and I want to create a web-based system to serve them up to users in an OS X application. It's just a tiny aside for my family's small business, so it doesn't need to be amazing at the moment; we just want to see if we can get anyone to use it. I can't believe it can be that hard to do this?
    - Some sort of SQL backend database to manage permissions for users?
    - Download through the application?
    - For the moment we just want to view the raw text files.
    - If we want people to pay, how would we go about incorporating that?
    Sorry for the vague outline. I'm still not 100% sure on what we want. Thanks for any help.

    Read the article

  • Using Linq-To-SQL I'm getting some weird behavior doing text searches with the .Contains method. Loo

    - by Nate Bross
    I have a table where I need to do a case-insensitive search on a text field. If I run this query in LinqPad directly on my database, it works as expected: Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase")) // also, adding in the same constraints I'm using in my repository works in LinqPad // Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase") && tbl.IsActive == true) In my application, I've got a repository which exposes IQueryable objects and does some initial filtering. It looks like this: var dc = new MyDataContext(); public IQueryable<Table> GetAllTables() { var ret = dc.Tables.Where(t => t.IsActive == true); return ret; } In the controller (it's an MVC app) I use code like this in an attempt to mimic the LinqPad query: var rpo = new RepositoryOfTable(); var tables = rpo.GetAllTables(); // for some reason, this does a CASE SENSITIVE search, which is NOT what I want. tables = tables.Where(tbl => tbl.Title.Contains("StringWithAnyCase")); return View(tables); The column is defined as an nvarchar(50) in SQL Server 2008. Any help or guidance is greatly appreciated!
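
    Worth noting as a hedge: with LINQ to SQL, Contains translates to a SQL LIKE whose case sensitivity follows the column's collation, so LinqPad and the application may simply be talking to databases (or columns) with different collations. Normalizing both sides sidesteps the question entirely:

        // LOWER() runs server-side, so the match is case-insensitive
        // regardless of the column's collation
        tables = tables.Where(tbl => tbl.Title.ToLower().Contains("stringwithanycase"));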

    Read the article

  • How do I save the edits for a checkbox that implements an email notification?

    - by Ralph The Mouf
    On my site, a user gets email notifications when someone comments on their profile, comments on their blog, etc. I have made an email settings page that has checkboxes to allow the user to decide whether to receive emails or not. This is what I am wrapping around the email notification code chunk for the pages that have the PHP mail: <?php if(isset($_POST['email_toggle']) && $_POST['email_toggle'] == 'true') { if(isset($_POST['commentProfileSubmit']) && $auth) { $query etc $to = etc } } My question is: what do I put in the email settings script that has the actual checkboxes to make them stay checked or unchecked once you submit your settings? In other words, what do I put in the if(isset portion to implement the changes? if(isset($_POST['email_toggle']) && $_POST['email_toggle'] == 'true') { /* what do I put here? */ header("Location: Profile.php?id=" . $auth->id); mysql_query($query,$connection); /* input/check boxes and submit button */ <tr> <td class="email_check"> <input type="checkbox" name="email_toggle" value="true" checked="checked" /> Receive email Notifications When Someone Answers A Question You've Answered </td> </tr> <tr> <td> <input style="margin:10px 0px 0px 10px;" class="submit" type="submit" name="email_toggle" value="Save Settings" /> </td> </tr> }
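
    A hedged sketch of one way to persist the preference (the users table, email_notify column, and $user row fetched for the logged-in account are assumptions, not from the original post). Two things to note: an unchecked checkbox is simply absent from $_POST, and in the markup above the submit button reuses name="email_toggle", so the button needs its own name for the checkbox state to be readable:

        <?php
        if (isset($_POST['save_settings'])) {
            // an absent checkbox means the user opted out
            $notify = isset($_POST['email_toggle']) ? 1 : 0;
            $query = "UPDATE users SET email_notify = $notify WHERE id = " . intval($auth->id);
            mysql_query($query, $connection);
            header("Location: Profile.php?id=" . $auth->id);
        }
        // when rendering the form, echo the stored value back into the checkbox
        $checked = $user['email_notify'] ? 'checked="checked"' : '';
        ?>
        <input type="checkbox" name="email_toggle" value="true" <?php echo $checked; ?> />
        <input class="submit" type="submit" name="save_settings" value="Save Settings" />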

    Read the article

  • Treeview Slow in IE?!?!

    - by Mike
    I have a treeview with around 200 records that needs to be fully expanded at all times (so no loading on demand). It is inside of an UpdatePanel with the UpdateMode set to Conditional. There are other UpdatePanels on the page as well that are set to Conditional. Depending on user actions, the tree may need to be rebuilt by calling DataBind and updating the UpdatePanel. Everything works fine in Firefox; the longest postback is about 2 seconds. With IE I have to wait up to 30 seconds sometimes, and the action may have nothing to do with the tree; just changing a dropdown in its own UpdatePanel takes forever. I have considered that the size of the ViewState and just the raw HTML generated may be causing the delay, but wouldn't that affect both browsers? Anyone have any ideas what is making it so slow in IE??? Thanks!

    Read the article

  • PHP Dropdown menu [closed]

    - by rShetty
    <br><h2>Select a Tag</h2></br> <?php $con = mysql_connect("localhost","root",""); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("portal", $con); $query = "SELECT tag_name FROM tags"; $result = mysql_query($query); ?> <select name="tag_name" id="abc"> <option size=30 selected>Select</option> <?php while($array = mysql_fetch_assoc($result)){ ?> <option value ="<?php echo $array['tag_name'];?>"><?php echo $array['tag_name'];?> </option> <?php } ?> </select> <br><br> This is a snippet of code for getting the dropdown menu on the page. I have a database named portal and a table named tags, with tag_name as the attribute. Please help me find the error in the program; I am not getting the tag_names in the dropdown menu.
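
    The markup and loop look sound, so the query itself is the first suspect; mysql_query() returns false on failure and the code never checks. A minimal debugging sketch using the same mysql_* functions:

        $result = mysql_query($query);
        if (!$result) {
            die('Query failed: ' . mysql_error());    // surfaces a bad table or column name
        }
        if (mysql_num_rows($result) === 0) {
            die('The tags table returned no rows.');  // the loop would otherwise render no options
        }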

    Read the article

  • Calculate distances and sort them

    - by Emir
    Hi guys, I wrote a function that can calculate the distance between two addresses using the Google Maps API. The addresses are obtained from the database. What I want to do is calculate the distance using the function I wrote and sort the places according to the distance, just like the "Locate Store Near You" feature in online stores. I'm going to specify what I want to do with an example: So, let's say we have 10 addresses in the database. And we have a variable $currentlocation. And I have a function called calcdist(), so that I can calculate the distances between the 10 addresses and $currentlocation, and sort them. Here is how I do it: $query = mysql_query("SELECT name, address FROM table"); while ($write = mysql_fetch_array($query)) { $distance = array(calcdist($currentlocation, $write["address"])); sort($distance); for ($i=0; $i<1; $i++) { echo "<tr><td><strong>".$distance[$i]." kms</strong></td><td>".$write['name']."</td></tr>"; } } But this doesn't work very well. It doesn't sort the numbers. Another challenge: how can I do this in an efficient way? Imagine there were an infinite number of addresses; how could I sort these addresses and page them?
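
    A hedged sketch of the fix: the loop above sorts a one-element array on every row, so nothing is ever ordered across rows. Collecting everything first and sorting once behaves as expected (calcdist() and the columns as in the question; the compare helper is illustrative). For very large address sets, the usual design is to compute a coarse distance in SQL (e.g. a haversine expression) so the database can ORDER BY and LIMIT for paging instead of PHP:

        function cmp_distance($a, $b) {
            if ($a['distance'] == $b['distance']) return 0;
            return ($a['distance'] < $b['distance']) ? -1 : 1;
        }

        $places = array();
        $result = mysql_query("SELECT name, address FROM table");
        while ($row = mysql_fetch_array($result)) {
            $row['distance'] = calcdist($currentlocation, $row['address']);
            $places[] = $row;                  // collect every row before sorting
        }
        usort($places, 'cmp_distance');        // sort once, over the whole set
        foreach ($places as $p) {
            echo "<tr><td><strong>" . $p['distance'] . " kms</strong></td><td>" . $p['name'] . "</td></tr>";
        }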

    Read the article

  • Mongodb update: how to check if an update succeeds or fails?

    - by zmg
    I think the title pretty much says it all. I'm working with MongoDB in PHP using the pecl driver. My updates are working great, but I'd like to build some error checking into my function(s). I've tried using lastError() in a pretty simple function: function system_db_update_object($query, $values, $database, $collection) { $connection = new Mongo(); $collection = $connection->$database->$collection; $connection->$database->resetError(); //Added for debugging $collection->update( $query, array('$set' => $values)); //$errorArray = $connection->$database->lastError(); var_dump($connection->$database->lastError());exit; // Var dump and /Exit/ } But pretty much regardless of what I try to update (whether it exists or not) I get these same basic results: array(4) { ["err"]=> NULL ["updatedExisting"]=> bool(true) ["n"]=> float(1) ["ok"]=> float(1) } Any help or direction would be greatly appreciated.
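
    With that generation of the pecl driver, a hedged alternative is to pass the safe option so update() itself returns the server's response, instead of relying on a separate lastError() round trip that can observe a different connection:

        $result = $collection->update($query, array('$set' => $values), array('safe' => true));
        if (!empty($result['err'])) {
            // the server reported an error for this specific write
        }
        if (empty($result['updatedExisting'])) {
            // no document matched $query, so nothing was modified
        }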

    Read the article

  • Recommended approach for error handling with PHP and MYSQL

    - by iama
    I am trying to capture database (MySQL) errors in my PHP web application. Currently, I see that there are functions like mysqli_error() and mysqli_errno() for capturing the last error that occurred. However, this still requires me to check for error occurrence using repeated if/else statements in my PHP code. You may check my code below to see what I mean. Is there a better approach to doing this? Or should I write my own code to raise exceptions and catch them in one single place? What is the recommended approach? Also, does PDO raise exceptions? Thanks. function db_userexists($name, $pwd, &$dbErr) { $bUserExists = false; $uid = 0; $dbErr = ''; $db = new mysqli(SERVER, USER, PASSWORD, DB); if (!mysqli_connect_errno()) { $query = "select uid from user where uname = ? and pwd = ?"; $stmt = $db->prepare($query); if ($stmt) { if ($stmt->bind_param("ss", $name, $pwd)) { if ($stmt->bind_result($uid)) { if ($stmt->execute()) { if ($stmt->fetch()) { if ($uid) $bUserExists = true; } } } } if (!$bUserExists) $dbErr = $db->error; $stmt->close(); } if (!$bUserExists) $dbErr = $db->error; $db->close(); } else { $dbErr = mysqli_connect_error(); } return $bUserExists; }
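
    Yes, PDO can be told to raise exceptions, which collapses the nested checks into a single try/catch. A minimal sketch (the DSN values and credentials are placeholders):

        $pdo = new PDO('mysql:host=localhost;dbname=mydb', DB_USER, DB_PASSWORD);
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        try {
            $stmt = $pdo->prepare('SELECT uid FROM user WHERE uname = ? AND pwd = ?');
            $stmt->execute(array($name, $pwd));
            $uid = $stmt->fetchColumn();       // false when no row matched
            $bUserExists = ($uid !== false);
        } catch (PDOException $e) {
            $dbErr = $e->getMessage();         // every database failure lands here
        }

    mysqli can behave the same way via mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT).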

    Read the article

  • How to prevent CAST errors in SSIS?

    - by manitra
    Hello, The question: Is it possible to ask SSIS to cast a value and return NULL in case the cast is not allowed, instead of throwing an error? My environment: I'm using Visual Studio 2005 and SQL Server 2005 on Windows Server 2003. The general context: Just in case you're curious, here is my use case. I have to store data coming from somewhere in a generic table (key/value structure with history) which contains values that can be strings, numbers or dates. The structure is something like this: table Values { Id int, Date datetime, -- for history Key nvarchar(50) not null, Value nvarchar(50), DateValue datetime, NumberValue numeric(19,9) } I want to put the raw value in the Value column, and try to put the same value in the DateValue column when I'm able to cast it to a datetime, and in the NumberValue column when I'm able to cast it to a number. Those two typed columns would make all sorts of aggregation and manipulation much easier and faster later. That's it; now you know why I'm asking this strange question. Thanks in advance for your help.
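
    One hedged workaround is to do the conversion in the source query rather than in SSIS, so a failed cast yields NULL instead of an error. The staging table name below is illustrative; note that ISNUMERIC accepts a few strings (such as '1e5' or currency symbols) that can still break the cast, so treat this strictly as a sketch. Within SSIS itself, a Derived Column transformation can alternatively redirect failing rows to its error output instead of failing the package:

        SELECT Value,
               CASE WHEN ISDATE(Value) = 1
                    THEN CAST(Value AS datetime) END AS DateValue,
               CASE WHEN ISNUMERIC(Value) = 1
                    THEN CAST(Value AS numeric(19,9)) END AS NumberValue
        FROM StagingValues;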

    Read the article

  • Go, AppEngine: How to structure templates for application

    - by laslowh
    How are people handling the use of templates in their Go-based AppEngine applications? Specifically, I'm looking for a project structure that affords the following: a hierarchical (directory) structure of templates and partial templates; letting me use HTML tools/editors on my templates (embedding template text in xxx.go files makes this difficult); and automatic reload of template text on the dev server. Potential stumbling blocks are: template.ParseGlob() will not traverse recursively, and for performance reasons it has been recommended not to upload your templates as raw text files (because those text files reside on different servers than the executing code). Please note that I am not looking for a tutorial or examples of the use of the template package. This is more of an app structure question. That being said, if you have code that solves the above problems, I would love to see it. Thanks in advance.
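
    ParseGlob will not recurse, but filepath.Walk can feed every template file into one shared set. A minimal sketch under stated assumptions (a templates/ directory of .html files; the names are illustrative, and nested files sharing a base name will collide, since ParseFiles keys templates by base name):

        package app

        import (
            "html/template"
            "os"
            "path/filepath"
            "strings"
        )

        var templates = template.New("root")

        // loadTemplates walks root recursively, parsing every .html file
        // into the shared template set.
        func loadTemplates(root string) error {
            return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
                if err != nil || info.IsDir() || !strings.HasSuffix(path, ".html") {
                    return err
                }
                _, err = templates.ParseFiles(path)
                return err
            })
        }

    Calling loadTemplates on every request on the dev server gives the automatic-reload behavior; in production it can run once at startup.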

    Read the article

  • Large Product catalog with statistics - alternatives to Sql Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return. The issue is this. Let's say a user searches by keyword. On the search results page I need to display/query for: the first 20 matching products (paged, sorted); the total count of matching products for paging; and the lists of stores, brands and colors of only the matching products. Each query takes about .5 to 1 second; altogether that is about 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches: Optimize the queries even more. I already spent a lot of time on this one, so I am not sure it can be pushed further. Load products first, then load the rest of the information using AJAX. More like a workaround; it will need a revised UI. Re-organize the data to be more report-friendly. I have already aggregated a lot of fields. I checked out several similar sites, for example zappos.com. Not only do they display the same information as I would like in under 1 second, but they also include statistics (number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?

    Read the article

  • Session does not give right records?

    - by Jugal
    I want to keep one session, but when I roll back a transaction, the transaction gets isActive=false, so I cannot commit or roll back in later statements using that same transaction; I then need to create a new transaction. But what is going wrong here? var session = NHibernateHelper.OpenSession(); /* It returns a new session. */ var transaction1 = session.BeginTransaction(); var list1 = session.Query<Make>().ToList(); /* It returns 4 records. */ session.Delete(list1[2]); /* After Rollback, the transaction is isActive=false, so I cannot commit or roll back from this transaction in the future; I need to create a new transaction. */ transaction1.Rollback(); var transaction2 = session.BeginTransaction(); /* It returns 3 records. Why am I not getting the object here (which was deleted, but whose delete was then rolled back)? */ var list2 = session.Query<Make>().ToList(); Does anyone have an idea what is going wrong here? I am not getting the deleted object whose deletion was rolled back.
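
    A hedged explanation: Rollback() reverts the database, but NHibernate does not rewind the session's first-level cache, so the session still believes the entity was deleted. Clearing the session (or evicting the affected entity) before re-querying resynchronizes it:

        transaction1.Rollback();
        session.Clear();   // discard the stale first-level cache after the rollback
        var transaction2 = session.BeginTransaction();
        var list2 = session.Query<Make>().ToList();   // should now return all 4 records again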

    Read the article

  • How do I differentiate between different descendents with the same name?

    - by zotty
    I've got some XML I'm trying to import with C#, which looks something like this: <run> <name = "bob"/> <date = "1958"/> </run> <run> <name = "alice"/> <date = "1969"/> </run> I load my XML using XElement xDoc = XElement.Load(filename); What I want to do is have a class for "run", under which I can store names and dates: public class RunDetails { public RunDetails(XElement xDoc, XNamespace xmlns) { var query = from c in xDoc.Descendants(xmlns + "run").Descendants(xmlns + "name") select c; int i=0; foreach (XElement a in query) { this.name = new NameStr(a, xmlns); // a class for names Name.Add(this.name); // Name is a List<NameStr> i++; } // Here, i=2, but what I want is a new instance of the RunDetails class for each <run> } } How can I set up my code to create a new instance of the RunDetails class for every <run>, and to only select the <name> and <date> inside a given <run>?
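
    A minimal sketch of the usual pattern: iterate the <run> elements themselves and hand each one to the constructor, so every RunDetails instance only queries inside its own element (assuming the real XML nests <name> and <date> inside <run>):

        var runs = (from r in xDoc.Descendants(xmlns + "run")
                    select new RunDetails(r, xmlns)).ToList();

        // and the constructor scopes its lookups to the single element it was given
        public RunDetails(XElement run, XNamespace xmlns)
        {
            this.name = new NameStr(run.Element(xmlns + "name"), xmlns);  // NameStr as in the question
            this.date = (string)run.Element(xmlns + "date");
        }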

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file:

    #MySQL Server Instance Configuration File
    # ----------------------------------------------------------------------
    # Generated by the MySQL Server Instance Configuration Wizard
    #
    #
    # Installation Instructions
    # ----------------------------------------------------------------------
    #
    # On Linux you can copy this file to /etc/my.cnf to set global options,
    # mysql-data-dir/my.cnf to set server-specific options
    # (@localstatedir@ for this installation) or to
    # ~/.my.cnf to set user-specific options.
    #
    # On Windows you should keep this file in the installation directory
    # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
    # make sure the server reads the config file use the startup option
    # "--defaults-file".
    #
    # To run the server from the command line, execute this in a
    # command line shell, e.g.
    # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    #
    # To install the server as a Windows service manually, execute this in a
    # command line shell, e.g.
    # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    #
    # And then execute this in a command line shell to start the server, e.g.
    # net start MySQLXY
    #
    #
    # Guidelines for editing this file
    # ----------------------------------------------------------------------
    #
    # In this file, you can use all long options that the program supports.
    # If you want to know the options a program supports, start the program
    # with the "--help" option.
    #
    # More detailed information about the individual options can also be
    # found in the manual.
    #
    #
    # CLIENT SECTION
    # ----------------------------------------------------------------------
    #
    # The following options will be read by MySQL client applications.
    # Note that only client applications shipped by MySQL are guaranteed
    # to read this section. If you want your own MySQL client program to
    # honor these values, you need to specify it as an option during the
    # MySQL client library initialization.
    #
    [client]
    port=3306

    [mysql]
    default-character-set=latin1

    # SERVER SECTION
    # ----------------------------------------------------------------------
    #
    # The following options will be read by the MySQL Server. Make sure that
    # you have installed the server correctly (see above) so it reads this
    # file.
    #
    [mysqld]

    # The TCP/IP Port the MySQL Server will listen on
    port=3306

    #Path to installation directory. All paths are usually resolved relative to this.
    basedir="D:/MySQL/"

    #Path to the database root
    datadir="D:/MySQL/data"

    # The default character set that will be used when a new schema or table is
    # created and no character set is defined
    default-character-set=latin1

    # The default storage engine that will be used when create new tables when
    default-storage-engine=MYISAM

    # Set the SQL mode to strict
    #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    # we changed this because there are a couple of queries that can get blocked otherwise
    sql-mode=""

    #performance configs
    skip-locking
    max_allowed_packet = 1M
    table_open_cache = 512

    # The maximum amount of concurrent sessions the MySQL server will
    # allow. One of these connections will be reserved for a user with
    # SUPER privileges to allow the administrator to login even if the
    # connection limit has been reached.
    max_connections=1510

    # Query cache is used to cache SELECT results and later return them
    # without actual executing the same query once again. Having the query
    # cache enabled may result in significant speed improvements, if your
    # have a lot of identical queries and rarely changing tables. See the
    # "Qcache_lowmem_prunes" status variable to check if the current value
    # is high enough for your load.
    # Note: In case your tables change very often or if your queries are
    # textually different every time, the query cache may result in a
    # slowdown instead of a performance improvement.
    query_cache_size=168M

    # The number of open tables for all threads. Increasing this value
    # increases the number of file descriptors that mysqld requires.
    # Therefore you have to make sure to set the amount of open files
    # allowed to at least 4096 in the variable "open-files-limit" in
    # section [mysqld_safe]
    table_cache=3020

    # Maximum size for internal (in-memory) temporary tables. If a table
    # grows larger than this value, it is automatically converted to disk
    # based table This limitation is for a single table. There can be many
    # of them.
    tmp_table_size=30M

    # How many threads we should keep in a cache for reuse. When a client
    # disconnects, the client's threads are put in the cache if there aren't
    # more than thread_cache_size threads from before. This greatly reduces
    # the amount of thread creations needed if you have a lot of new
    # connections. (Normally this doesn't give a notable performance
    # improvement if you have a good thread implementation.)
    thread_cache_size=64

    #*** MyISAM Specific options

    # The maximum size of the temporary file MySQL is allowed to use while
    # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE.
    # If the file-size would be bigger than this, the index will be created
    # through the key cache (which is slower).
    myisam_max_sort_file_size=100G

    # If the temporary file used for fast index creation would be bigger
    # than using the key cache by the amount specified here, then prefer the
    # key cache method. This is mainly used to force long character keys in
    # large tables to use the slower key cache method to create the index.
    myisam_sort_buffer_size=64M

    # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
    # Do not set it larger than 30% of your available memory, as some memory
    # is also required by the OS to cache rows. Even if you're not using
    # MyISAM tables, you should still set it to 8-64M as it will also be
    # used for internal temporary disk tables.
    key_buffer_size=3072M

    # Size of the buffer used for doing full table scans of MyISAM tables.
    # Allocated per thread, if a full scan is needed.
    read_buffer_size=2M
    read_rnd_buffer_size=8M

    # This buffer is allocated when MySQL needs to rebuild the index in
    # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
    # into an empty table. It is allocated per thread so be careful with
    # large settings.
    sort_buffer_size=2M

    #*** INNODB Specific options ***
    innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

    # Use this option if you have a MySQL server with InnoDB support enabled
    # but you do not plan to use it. This will save memory and disk space
    # and speed up some things.
    skip-innodb

    # Additional memory pool that is used by InnoDB to store metadata
    # information. If InnoDB requires more memory for this purpose it will
    # start to allocate it from the OS. As this is fast enough on most
    # recent operating systems, you normally do not need to change this
    # value. SHOW INNODB STATUS will display the current amount used.
    innodb_additional_mem_pool_size=11M

    # If set to 1, InnoDB will flush (fsync) the transaction logs to the
    # disk at each commit, which offers full ACID behavior. If you are
    # willing to compromise this safety, and you are running small
    # transactions, you may set this to 0 or 2 to reduce disk I/O to the
    # logs. Value 0 means that the log is only written to the log file and
    # the log file flushed to disk approximately once per second. Value 2
    # means the log is written to the log file at each commit, but the log
    # file is only flushed to disk approximately once per second.
    innodb_flush_log_at_trx_commit=1

    # The size of the buffer InnoDB uses for buffering log data. As soon as
    # it is full, InnoDB will have to flush it to disk. As it is flushed
    # once per second anyway, it does not make sense to have it very large
    # (even with long transactions).
    innodb_log_buffer_size=6M

    # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
    # row data. The bigger you set this the less disk I/O is needed to
    # access data in tables. On a dedicated database server you may set this
    # parameter up to 80% of the machine physical memory size. Do not set it
    # too large, though, because competition of the physical memory may
    # cause paging in the operating system. Note that on 32bit systems you
    # might be limited to 2-3.5G of user level memory per process, so do not
    # set it too high.
    innodb_buffer_pool_size=500M

    # Size of each log file in a log group. You should set the combined size
    # of log files to about 25%-100% of your buffer pool size to avoid
    # unneeded buffer pool flush activity on log file overwrite. However,
    # note that a larger logfile size will increase the time needed for the
    # recovery process.
    innodb_log_file_size=100M

    # Number of threads allowed inside the InnoDB kernel. The optimal value
    # depends highly on the application, hardware as well as the OS
    # scheduler properties. A too high value may lead to thread thrashing.
    innodb_thread_concurrency=10

    #replication settings (this is the master)
    log-bin=log
    server-id = 1

    Thanks for all the help. It is greatly appreciated.

    Read the article

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello All. I have a denormalized table containing employee information. The fields are employee id, name, and department name. The primary key is a composite one consisting of all three fields. An employee can belong to multiple departments. I want to read/write the objects in the table using the EclipseLink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors etc.). Example data:

        1 e1 dep1
        2 e1 dep2
        3 e2 dep1
        4 e2 dep3
        5 e3 dep1
        5 e3 dep2
        5 e3 dep3

    A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However, I want to group the entities by emp id and return all the departments an employee belongs to as a list. I can merge the entities after retrieving them, but if I can use some EclipseLink feature out of the box, that would be better. One way to do the read is the following: I create two dynamic types corresponding to employee: one having id, name as the primary key, and one having id, department as the primary key. I create a OneToManyMapping from the first type to the second one. Then when I query the first type, it does return the departments to which an employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by EclipseLink or JPA? I cannot get the same dynamic type configuration working for the write scenario, because when I write the changes using the writeObject method of UnitOfWork, it generates insert queries which enter the following rows in the table:

        id  name     department
        102 emp_102
        102          st
        102          dep_102
        102          dep_102
        102          dep_102

    instead of:

        id  name     department
        102 emp_102  st
        102 emp_102  dep_102
        102 emp_102  dep_102
        102 emp_102  dep_102

    Is there any way I can get write to work with this schema using EclipseLink? I want to avoid doing the heavy lifting of merging the rows for such a denormalized schema or generating each row before doing a write. Is there no clean way of doing this using EclipseLink or JPA? Thanks in advance.
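
    For comparison, a hedged sketch of how static JPA 2.0 entities would model the read/write pair with an element collection; JPA has no standard mapping that repeats the name on every row of a single denormalized table, so the standard shape splits the repeating departments into a collection table (all names here are assumptions, not from the original post):

        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.*;

        @Entity
        public class Employee {
            @Id
            private int id;

            private String name;

            // each department becomes one row keyed by the employee id,
            // and JPA maintains the rows on both read and write
            @ElementCollection
            @CollectionTable(name = "employee_department",
                             joinColumns = @JoinColumn(name = "id"))
            @Column(name = "department")
            private Set<String> departments = new HashSet<String>();
        }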

    Read the article

  • Optimization Techniques in Python

    - by fear-matrix
    Recently I developed a billing application for my company with Python/Django. For a few months everything was fine, but now I am observing that the performance is dropping as more and more users use the application. The problem is that the application is now very critical for the finance team, and they are pressing me to sort out the performance issue. I have no other option but to find a way to increase the performance of the billing application. So, do you know of any performance optimization techniques in Python that will really help me with the scalability issue?
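
    Profiling first is the honest answer, since Django apps are usually database-bound rather than CPU-bound in Python. As a hedged sketch of two common wins at the ORM layer (the Invoice model and its fields are illustrative, not from the original application): avoid N+1 queries with select_related(), and cache hot, rarely-changing querysets:

        from django.core.cache import cache

        from billing.models import Invoice  # hypothetical app and model


        def recent_invoices():
            invoices = cache.get('recent_invoices')
            if invoices is None:
                # one JOINed query instead of one extra query per invoice's customer
                invoices = list(
                    Invoice.objects.select_related('customer').order_by('-created')[:50]
                )
                cache.set('recent_invoices', invoices, 300)  # cache for five minutes
            return invoices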

    Read the article

  • What's the best way to access a MS Access database using PHP?

    - by Jack Roscoe
    Hi, I need to access an MS Access database and retrieve some data from it using PHP. I've looked around the web and found the following line, which seems to correctly connect to the database: $conn->Open("DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=C:\wamp\www\data\MYDB.mdb"); However, I have tried to retrieve some data in the following way: $query = "SELECT pageid FROM pages_table"; $result = mysqli_query($conn, $query); $amount_of_pages = 0; if(mysqli_num_rows($result) <= 0) echo "No results found."; else while($row = mysqli_fetch_array($result, MYSQL_ASSOC)) $amount_of_pages++; And was presented with the following errors: Warning: mysqli_query() expects parameter 1 to be mysqli, object given in C:\wamp\www\data\index.php on line 19 Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, null given in C:\wamp\www\data\index.php on line 23 No results found. I don't really understand the connection to the Access database; is there something I should be doing differently? Thanks in advance for any help.
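
    The warnings come from mixing drivers: the mysqli_* functions only speak to MySQL servers, while $conn here is a COM/ADO-style object, so mysqli_query() is handed the wrong kind of handle. A hedged sketch using PHP's ODBC functions against the same Access driver instead:

        $conn = odbc_connect('DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=C:\\wamp\\www\\data\\MYDB.mdb', '', '');
        if (!$conn) {
            die('Could not connect: ' . odbc_errormsg());
        }

        $result = odbc_exec($conn, 'SELECT pageid FROM pages_table');
        $amount_of_pages = 0;
        while (odbc_fetch_row($result)) {
            $amount_of_pages++;   // count by fetching; Access ODBC often cannot report row counts up front
        }
        echo $amount_of_pages . ' pages found.';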

    Read the article

  • Spring & Hibernate SessionFactory - recovery from a down server

    - by MJB
    So, pre-Spring, we used a version of HibernateUtil that cached the SessionFactory instance if a successful raw JDBC connection was made, and threw SQLException otherwise. This allowed us to recover when the initial setup of the SessionFactory went "bad" due to authentication or server connection issues. We moved to Spring and wired things in a more or less classic way with the LocalSessionFactoryBean, the C3P0 datasource, and various DAO classes which have the SessionFactory injected. Now, if the SQL server appears to not be up when the web app starts, the web app never recovers. All access to the DAO methods blows up because a null SessionFactory gets injected. (Once the SessionFactory is made properly, the connection pool mostly handles the up/down status of the SQL server fine, so recovery is possible.) Now, the DAO methods are wired by default to be singletons, and we could change them to prototype. I don't think that will fix the matter, though; I believe the LocalSessionFactoryBean is now "stuck" and caches the null reference (I haven't tested this yet, I'll shamefully admit). This has to be an issue that concerns people.

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records, since the page is relatively high-traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing old (> 1 week) records. So, I'm asking you guys: how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this: SELECT count(*) AS count_all FROM blog_views WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1); The tables have the following structures: CREATE TABLE IF NOT EXISTS `blogs` ( `id` int(11) NOT NULL auto_increment, `name` varchar(255) default NULL, `perma_name` varchar(255) default NULL, `author_id` int(11) default NULL, `created_at` datetime default NULL, `updated_at` datetime default NULL, `blog_picture_id` int(11) default NULL, `blog_picture2_id` int(11) default NULL, `page_id` int(11) default NULL, `blog_picture3_id` int(11) default NULL, `active` tinyint(1) default '1', PRIMARY KEY (`id`), KEY `index_blogs_on_author_id` (`author_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ; And CREATE TABLE IF NOT EXISTS `blog_views` ( `id` int(11) NOT NULL auto_increment, `blog_id` int(11) default NULL, `ip` varchar(255) default NULL, `created_at` datetime default NULL, `updated_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_blog_views_on_blog_id` (`blog_id`), KEY `created_at` (`created_at`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
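
    Two hedged suggestions. First, the query filters on blog_id and created_at together, but the table only carries single-column keys, so a composite index lets MySQL satisfy the whole WHERE from one index. Second, the usual structural fix is to keep raw hits out of the hot path with a small summary table (names below are illustrative):

        -- composite index matching the WHERE clause
        ALTER TABLE blog_views ADD KEY index_on_blog_and_date (blog_id, created_at);

        -- or aggregate hits per blog per day so counts stay cheap
        CREATE TABLE blog_view_counts (
          blog_id INT NOT NULL,
          day DATE NOT NULL,
          views INT NOT NULL DEFAULT 0,
          PRIMARY KEY (blog_id, day)
        ) ENGINE=InnoDB;

        -- on every hit:
        INSERT INTO blog_view_counts (blog_id, day, views)
        VALUES (1, CURRENT_DATE, 1)
        ON DUPLICATE KEY UPDATE views = views + 1;

        -- the slow count becomes a small range scan:
        SELECT SUM(views) FROM blog_view_counts
        WHERE blog_id = 1 AND day >= '2010-01-01';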

    Read the article
