Search Results

Search found 4788 results on 192 pages for 'adhoc queries'.


  • Understanding many to many relationships and Entity Framework

    - by Anders Svensson
    I'm trying to understand the Entity Framework, and I have a table "Users" and a table "Pages". These are related in a many-to-many relationship with a junction table "UserPages". First of all, I'd like to know if I'm designing this relationship correctly using many-to-many: one user can visit multiple pages, and each page can be visited by multiple users, so am I right in using many-to-many?

    Secondly, and more importantly: as I have understood m2m relationships, the User and Page tables should not repeat information, i.e. there should be only one record for each user and each page. But then, in the Entity Framework, how am I able to add new visits to the same page for the same user? I was thinking I could simply use the Count() method on the IEnumerable returned by a LINQ query to get the number of times a user has visited a certain page, but I see no way of doing that. In LINQ to SQL I could access the junction table and add records there to reflect added visits to a certain page by a certain user, as many times as necessary. But in the EF I can't access the junction table; I can only go from User to a Pages collection and vice versa.

    I'm sure I'm misunderstanding relationships or something, but I just can't figure out how to model this. I could always have a Count column in the Page table, but as far as I have understood you're not supposed to design database tables like that; those values should be collected by queries. Please help me understand what I'm doing wrong.


  • memory leak in php script

    - by Jasper De Bruijn
    Hi, I have a PHP script that runs a MySQL query, then loops over the result, and in that loop also runs several queries:

        $sqlstr = "SELECT * FROM user_pred WHERE uprType != 2 AND uprTurn=$turn ORDER BY uprUserTeamIdFK";
        $utmres = mysql_query($sqlstr) or trigger_error($termerror = __FILE__." - ".__LINE__.": ".mysql_error());
        while($utmrow = mysql_fetch_array($utmres, MYSQL_ASSOC)) {
            // some stuff happens here
            // echo memory_get_usage() . " - 1241<br/>\n";
            $sqlstr = "UPDATE user_roundscores SET ursUpdDate=NOW(),ursScore=$score WHERE ursUserTeamIdFK=$userteamid";
            if(!mysql_query($sqlstr)) {
                $err_crit++;
                $cLog->WriteLogFile("Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid\n");
                echo "Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid<br>\n";
                break;
            }
            unset($sqlstr);
            // echo memory_get_usage() . " - 1253<br/>\n";
            // some stuff happens here too
        }

    The UPDATE query never fails. For some reason, between the two calls of memory_get_usage(), some memory gets added. Because the big loop runs about 500,000 or more times, in the end it really adds up to a lot of memory. Is there anything I'm missing here? Could it perhaps be that the memory is not actually added between the two calls, but at another point in the script?

    Edit: some extra info. Before the loop it's at about 5 MB, after the loop about 440 MB, and every UPDATE query adds about 250 bytes (the rest of the memory gets added at other places in the loop). The reason I didn't post more of the "other stuff" is that it's about 300 lines of code. I posted this part because it looks to be where the most memory is added.
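    A common culprit in loops like this is the client buffering the whole result set; in PHP the analogous knob is mysql_unbuffered_query(). For comparison, here is a minimal sketch of the same read-update loop in Python with MySQLdb (which other posts on this page use), streaming rows with an unbuffered cursor so memory stays flat. Credentials and the score calculation (compute_score) are illustrative assumptions, not part of the question:

        import MySQLdb
        import MySQLdb.cursors

        # Two connections: MySQL cannot run the UPDATEs on a connection
        # that still has an open unbuffered (streaming) result set.
        read_conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="d")
        write_conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="d")

        turn = 1  # illustrative
        read = read_conn.cursor(MySQLdb.cursors.SSCursor)  # streams rows, does not buffer
        read.execute("SELECT uprUserTeamIdFK FROM user_pred "
                     "WHERE uprType != 2 AND uprTurn = %s ORDER BY uprUserTeamIdFK", (turn,))

        write = write_conn.cursor()
        for (userteamid,) in read:
            score = compute_score(userteamid)  # hypothetical stand-in for "some stuff"
            write.execute("UPDATE user_roundscores "
                          "SET ursUpdDate = NOW(), ursScore = %s "
                          "WHERE ursUserTeamIdFK = %s", (score, userteamid))
        write_conn.commit()

    Either way, the point is that each row is released as soon as it has been processed instead of sitting in a 500,000-row client-side buffer.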


  • How do I get PHP variables from this MySQL query?

    - by CT
    I am working on an asset database problem using PHP / MySQL. In this script I would like to search my assets by an asset ID and have it return all related fields. First I query the database asset table and find the asset's type. Then, depending on the type, I run one of three queries.

        <?php
        //make database connect
        mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
        mysql_select_db("asset_db") or die(mysql_error());

        //get type of asset
        $type = mysql_query("
            SELECT asset.type
            FROM asset
            WHERE asset.id = 93120
        ") or die(mysql_error());

        switch ($type) {
            case "Server":
                //do some stuff that involves a mysql query
                mysql_query("
                    SELECT asset.id, asset.company, asset.location, asset.purchase_date,
                           asset.purchase_order, asset.value, asset.type, asset.notes,
                           server.manufacturer, server.model, server.serial_number,
                           server.esc, server.user, server.prev_user, server.warranty
                    FROM asset
                    LEFT JOIN server ON server.id = asset.id
                    WHERE asset.id = 93120
                ");
                break;
            case "Laptop":
                //same query as the Server case, joining laptop instead of server
                break;
            case "Desktop":
                //same query as the Server case, joining desktop instead of server
                break;
        }
        ?>

    So far I am able to get asset.type into $type. How would I go about getting the rest of the variables (laptop.model to $model, asset.notes to $notes, and so on)? Thank you.


  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails:

        create procedure test1 as

        SELECT f1, f2, f3
        INTO #table1
        FROM main_table
        WHERE 1 = 2

        -- insert rows into #table1

        create index my_idx on #table1 (f1)

        SELECT f1, f2, f3
        FROM #table1 (index my_idx)
        WHERE f1 = 11    -- "QUERY X"

    When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning:

        Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead.

    This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches by adding "go" after the index creation:

        create index my_idx on #table1 (f1)
        go

    Now the "QUERY X" query plan shows the use of index "my_idx".

    QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad hoc copy above.

    P.S. If it matters, this is on Sybase 12.


  • Understanding MongoDB (and NoSQL in general) and how to make the best use of it

    - by Earlz
    Hello, I am beginning to think that the next project I want to do would work better with a NoSQL solution. The project would involve either a ton of 2-column tables or a ton of dynamic queries with dynamically generated columns in a traditional SQL database, so I feel a NoSQL database would be much cleaner. I'm looking at MongoDB and it looks pretty promising. Anyway, I'm attempting to make sense of it all. Also, I will be using MongoMapper in Ruby.

    I'm confused as to how to lay things out in such a freeform database. I've read http://stackoverflow.com/questions/2170152/nosql-best-practices and the answer there says that normalization is usually bad in a NoSQL DB. So what would be the best way of laying out, say, a simple blog with users, posts, and comments? My natural thought was to have three collections, one for each, and then link them by unique IDs. But this apparently is wrong? So, what are some of the ways to lay out such a thing?

    My concern with the answer given in the other question is: what if the author's name changed? You'd have to go through updating a ton of posts and comments. But is this an okay thing to do with NoSQL?
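    To make the trade-off concrete, here is a minimal sketch of one common layout, shown with pymongo in Python rather than MongoMapper purely for illustration (collection and field names are assumptions): comments are embedded in the post because they are only ever read with it, while authors are referenced by _id so a name change touches exactly one document.

        from pymongo import MongoClient

        db = MongoClient()["blog"]  # assumes a local mongod; the db name is illustrative

        # Users live in their own collection: renaming an author is one update.
        author_id = db.users.insert_one({"name": "Earlz"}).inserted_id

        # Comments are embedded in the post document; the author is referenced
        # by _id rather than denormalized by name.
        db.posts.insert_one({
            "author_id": author_id,
            "title": "First post",
            "body": "...",
            "comments": [
                {"author_id": author_id, "text": "Nice post"},
            ],
        })

        # Rendering a post costs one extra query to resolve author names.
        post = db.posts.find_one({"title": "First post"})
        author = db.users.find_one({"_id": post["author_id"]})

    The alternative described in the linked answer (denormalizing the name into every post and comment) trades that extra read-time query for exactly the mass update the question worries about; both are considered acceptable in NoSQL, depending on the read/write ratio.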


  • How do I construct a more complex single LINQ to XML query?

    - by Cyberherbalist
    I'm a LINQ newbie, so the following might turn out to be very simple and obvious once it's answered, but I have to admit that the question is kicking my arse. Given this XML:

        <measuresystems>
          <measuresystem name="SI" attitude="proud">
            <dimension name="mass" dim="M" degree="1">
              <unit name="kilogram" symbol="kg">
                <factor name="hundredweight" foreignsystem="US" value="45.359237" />
                <factor name="hundredweight" foreignsystem="Imperial" value="50.80234544" />
              </unit>
            </dimension>
          </measuresystem>
        </measuresystems>

    I can query for the value of the conversion factor between kilogram and US hundredweight using the following LINQ to XML, but surely there is a way to condense the four successive queries into a single complex query?

        XElement mss = XElement.Load(fileName);

        IEnumerable<XElement> ms = from el in mss.Elements("measuresystem")
                                   where (string)el.Attribute("name") == "SI"
                                   select el;

        IEnumerable<XElement> dim = from e2 in ms.Elements("dimension")
                                    where (string)e2.Attribute("name") == "mass"
                                    select e2;

        IEnumerable<XElement> unit = from e3 in dim.Elements("unit")
                                     where (string)e3.Attribute("name") == "kilogram"
                                     select e3;

        IEnumerable<XElement> factor = from e4 in unit.Elements("factor")
                                       where (string)e4.Attribute("name") == "pound"
                                          && (string)e4.Attribute("foreignsystem") == "US"
                                       select e4;

        foreach (XElement ex in factor)
        {
            Console.WriteLine((string)ex.Attribute("value"));
        }


  • Erlang ODBC parameter query with null parameters

    - by Schlomer
    Is it possible to pass null values to parameter queries? For example:

        Sql = "insert into TableX values (?,?)".
        Params = [{sql_integer, [Val1]},
                  {sql_float, [Val2]}].    % Val2 may be a float, or it may be the atom 'undefined'
        odbc:param_query(OdbcRef, Sql, Params).

    Now, of course odbc:param_query/3 is going to complain if Val2 is undefined when it tries to match it to a sql_float, but my question is: is it possible to use a parameterized query, such as

        Sql = "insert into TableY values (?,?,?,?,?,?,?,?,?)".

    with any null parameters? I have a use case where I am dumping a large amount of real-time data into a database by either inserting or updating. Some of the tables I am updating have a dozen or so nullable fields, and I do not have a guarantee that all of the data will be there. Concatenating SQL together for each query, checking for null values, seems complex and the wrong way to do it. Having a parameterized query for each permutation is simply not an option. Any thoughts or ideas would be fantastic! Thank you!


  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries to get the updates, deletes and inserts segregated into their own calls. I have two tables, one in each of two databases. One is a read-only feeds database and the other is the T-SQL read/write production source. There are a few key columns in common between the two. What I am doing to set up is this:

        List<model.AutoWithImage> feedProductList =
            _dbFeed.AutoWithImage.Where(a => a.ClientID == ClientID).ToList();

        List<model.vwCompanyDetails> companyDetailList =
            _dbRiv.vwCompanyDetails.Where(a => a.ClientID == ClientID).ToList();

        foreach (model.vwCompanyDetails companyDetail in companyDetailList)
        {
            List<model.Product> productList = _dbRiv.Product.Include("Company")
                .Where(a => a.Company.CompanyId == companyDetail.CompanyId).ToList();
        }

    Now that I have a (source) list of products from the feed, and an existing (target) list of products from my prod DB, I'd like to do three things:

    1. Find all SKUs in the feed that are not in the target
    2. Find all SKUs that are in both and are active feed products, and update the target
    3. Find all SKUs that are in both and are inactive, and soft-delete them from the target

    What are the best practices for doing this without running a double loop? Would prefer a LINQ to Objects solution as I already have my objects.

    EDIT: BTW, I will need to transfer info from feed rows to target rows in the first two instances, and just set a flag in the last instance. TIA


  • linq: SQL performance on high loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increasing and decreasing values). I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @id

    It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved:

        Stats.RegisteredUsers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased elsewhere (that happens on my site all the time), then LINQ will be like: "oops, this value is already 100,001. Whatever, I'll throw an exception." You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002.

    Like I said, this happened to me all the time. The stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?


  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcached + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now:

    - Using memcached on all major queries and leaving it alone to do what it does best.
    - Using memcached for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcached is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes.

    What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcached and upgrading the memory as the load increases with the number of users?

    Thanks a lot!
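    For what the combined approach can look like in practice, here is a minimal two-tier read-through sketch. It uses the python-memcached client in Python (several posts on this page use Python; the question itself is PHP-centric, so treat this as layering illustration only). The disk tier, paths, and TTL handling are assumptions:

        import os
        import pickle
        import time

        import memcache  # python-memcached client

        mc = memcache.Client(["127.0.0.1:11211"])
        DISK_DIR = "/tmp/appcache"  # illustrative; assumes keys are filesystem-safe
        if not os.path.isdir(DISK_DIR):
            os.makedirs(DISK_DIR)

        def cache_set(key, value, ttl=300):
            mc.set(key, value, time=ttl)                    # tier 1: RAM
            with open(os.path.join(DISK_DIR, key), "wb") as f:
                pickle.dump((time.time() + ttl, value), f)  # tier 2: disk

        def cache_get(key):
            value = mc.get(key)          # hottest data is served from RAM
            if value is not None:
                return value
            try:                         # fall back to the disk tier
                with open(os.path.join(DISK_DIR, key), "rb") as f:
                    expires, value = pickle.load(f)
            except (IOError, OSError):
                return None
            if expires < time.time():
                return None
            mc.set(key, value)           # promote back into memcached
            return value

    The payoff of the hybrid is that memcached evictions and restarts cost only a disk read instead of a trip back to the database.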


  • SQL using sum to count results of multiple subqueries

    - by asdas
    I have a table with two columns: an integer and a varchar. I am given only the integer values but need to do work on the varchar (string) values. Given an integer and a list of other integers (no overlap), I want to find the string for that single integer. Then I want to take that string and run the INSTR command with it against all the other strings for all the other integers. Then I want the sum of all the INSTR results, so the result is one number.

    So let's say I have int x, and list y = [y0, y1, y2]. I want to do three INSTR commands, like:

        SUM(INSTR(string for x, string for y0),
            INSTR(string for x, string for y1),
            INSTR(string for x, string for y2))

    I think I'm going in the wrong direction. This is what I have so far; I'm not good with subqueries:

        SELECT SUM (
            SELECT INSTR (
                SELECT string FROM pages WHERE int=? LIMIT 1,
                ( SELECT string FROM pages WHERE id=? OR id=? OR id=? LIMIT 3 )
            )
        )
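    This shape of problem usually comes out more naturally as a self-join than as nested scalar subqueries: join the row for x against the rows for the ys, apply INSTR to each pair, and let SUM collapse the result to one number. A sketch, assuming the table is pages(id, string) as described, shown in Python with MySQLdb (used elsewhere on this page); credentials are placeholders:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="d")
        cur = conn.cursor()

        x_id = 1           # the single integer
        y_ids = [2, 3, 4]  # the list of other integers

        # One placeholder per y id, so the list length can vary.
        placeholders = ", ".join(["%s"] * len(y_ids))
        sql = ("SELECT SUM(INSTR(x.string, y.string)) "
               "FROM pages x JOIN pages y ON y.id IN (%s) "
               "WHERE x.id = %%s") % placeholders

        cur.execute(sql, y_ids + [x_id])
        (total,) = cur.fetchone()  # one number: the sum of the INSTR results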


  • querying huge database table takes too much of time in mysql

    - by Vijay
    Hi all, I am running SQL queries on a MySQL DB table that has 110 million+ unique records for a whole day.

    Problem: whenever I run any query with a WHERE clause it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole DB table. Could you please guide me to optimize / restructure the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, dual-core dual CPU 3GHz
        RHEL 3

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid
        [root@reports root]#

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,
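    One thing the DDL makes visible: the table defines no keys or indexes at all, so every WHERE clause is a full scan of all 110M rows. A hedged sketch of the first step; the choice of USER and DT as filter columns is an assumption, so index whatever the queries actually filter on:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="root", passwd="p", db="logs")
        cur = conn.cursor()

        # Give MyISAM a large sort buffer for this session so the index
        # build can use the faster sort-based path.
        cur.execute("SET SESSION myisam_sort_buffer_size = 268435456")

        # With an index on the filtered columns, MySQL can seek to the
        # matching rows instead of scanning the whole table.
        cur.execute("ALTER TABLE RAW_LOG_20100504 "
                    "ADD INDEX idx_user_dt (`USER`, `DT`)")

    The build itself takes a while on a table this size, but it is a one-time cost, and daily tables could be created with the index already in place.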


  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open-source suggestions as well. I am looking for advice, tales from the trenches, or even better, direct instruction on what I will need as far as amount of hardware and structure of software.

    Requirements:

    Must have:
    - The ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen')
    - The ability to return results very quickly; I'd say 500ms is an upper bound.

    Nice to have:
    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style.
    - In-word substring matching, so that for example 'st' would match 'bestseller'.

    Note: this index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
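    The standard trick for both matching requirements is to expand each term into its prefixes (and, for in-word matching, into n-grams from every position) at index time, so a type-ahead lookup becomes an exact match; this is roughly what Lucene's edge n-gram token filter does. A small, language-neutral sketch of the idea in Python, with the relevance signal folded in (terms and popularity numbers are made up):

        from collections import defaultdict

        def edge_ngrams(term, max_len=10):
            # 'stephen' -> 's', 'st', 'ste', ... computed once, at index time.
            # For in-word matching ('st' -> 'bestseller'), emit grams starting
            # at every position, at the cost of a much larger index.
            term = term.lower()
            return [term[:i] for i in range(1, min(len(term), max_len) + 1)]

        index = defaultdict(list)  # prefix -> [(popularity, term), ...]

        for term, popularity in [("stephen", 120), ("steve", 250), ("stella", 90)]:
            for gram in edge_ngrams(term):
                index[gram].append((popularity, term))

        def suggest(prefix, limit=10):
            # A lookup is a hash probe plus a sort on the relevance signal,
            # which is what keeps latency well under a 500ms budget.
            hits = sorted(index[prefix.lower()], reverse=True)[:limit]
            return [term for _, term in hits]

        print(suggest("st"))  # ['steve', 'stephen', 'stella']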


  • Creating has_many :through records 2x times

    - by antiarchitect
    I have these models:

        class Question < ActiveRecord::Base
          WEIGHTS = %w(medium hard easy)
          belongs_to :test
          has_many :answers, :dependent => :destroy
          has_many :testing_questions
        end

        class Testing < ActiveRecord::Base
          belongs_to :student, :foreign_key => 'user_id'
          belongs_to :subtest
          has_many :testing_questions, :dependent => :destroy
          has_many :questions, :through => :testing_questions
        end

    So when I try to bind questions to a testing on its creation:

        >> questions = Question.all
        ...
        >> questions.count
        => 3
        >> testing = Testing.create(:user_id => 3, :subtest_id => 1, :questions => questions)
          Testing Columns (0.9ms)  SHOW FIELDS FROM `testings`
          SQL (0.1ms)  BEGIN
          SQL (0.1ms)  COMMIT
          SQL (0.1ms)  BEGIN
          Testing Create (0.3ms)  INSERT INTO `testings` (`created_at`, `updated_at`, `user_id`, `subtest_id`) VALUES('2010-05-18 00:53:05', '2010-05-18 00:53:05', 3, 1)
          TestingQuestion Columns (0.9ms)  SHOW FIELDS FROM `testing_questions`
          TestingQuestion Create (0.3ms)  INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(1, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
          TestingQuestion Create (0.4ms)  INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(2, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
          TestingQuestion Create (0.3ms)  INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(3, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
          TestingQuestion Create (0.3ms)  INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(1, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
          TestingQuestion Create (0.3ms)  INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(2, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
          TestingQuestion Create (0.3ms)  INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(3, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
          SQL (90.2ms)  COMMIT
        => #<Testing id: 31, subtest_id: 1, user_id: 3, created_at: "2010-05-18 00:53:05", updated_at: "2010-05-18 00:53:05">

    There are six INSERT queries and six records created in testing_questions: each join record is created twice. Why?


  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query?

    I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the ID of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId FROM settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;
            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the DB help?
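    What is described here is less a lag than a read-test-write race: two instances can both call getCurrentItem() before either one's setCurrentItem() lands, and both then claim the same item. A text file has exactly the same race. One fix is to make the claim a single atomic statement, so the database arbitrates. A sketch of the idea, in Python with MySQLdb (used elsewhere on this page) with table names taken from the question; credentials are placeholders:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="d")
        conn.autocommit(True)
        cur = conn.cursor()

        def claim(item_id):
            # The WHERE clause makes check-and-advance one atomic statement:
            # only one process can move the pointer past a given id.
            cur.execute(
                "UPDATE settings SET currentItemId = %s WHERE currentItemId < %s",
                (item_id, item_id))
            return cur.rowcount == 1  # 0 rows means another worker got there first

        cur.execute("SELECT id FROM items WHERE status = 'pending' ORDER BY id")
        for (item_id,) in cur.fetchall():
            if not claim(item_id):
                continue
            # process the item here

    The same pattern can be written in PHP; the essential part is that the compare and the update happen inside one UPDATE statement rather than as separate SELECT and UPDATE queries.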


  • How do I retrieve an automated report and save it to a database?

    - by Mason Wheeler
    I've got a web server that will take scripts in Python, PHP or Perl. I don't know much about any of those languages, but of the three, Python seems the least scary. It has a MySQL database set up, and I know enough SQL to manage it and write queries for it. I also have a program that I want to add automated error reporting to: something goes wrong, it sends a bug report to my server. What I don't know how to do is write a Python script that will sit on the web server and, when my program sends in a bug report, do the following:

    1. Receive the bug report.
    2. Parse it out into sections.
    3. Insert it into the database.
    4. Have the server send me an email.

    From what little I understand, this seems like it shouldn't be too difficult if I only knew what I was doing. Could someone point me to a site that explains the basic principles I'd need to create a script like this?
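    For a server that runs plain Python scripts, the standard-library cgi, MySQLdb and smtplib modules cover all four steps. A minimal sketch; the field names, table layout and addresses are illustrative assumptions, not a known protocol:

        #!/usr/bin/env python
        import cgi
        import smtplib
        from email.mime.text import MIMEText

        import MySQLdb

        form = cgi.FieldStorage()                       # 1. receive the POSTed report
        version = form.getfirst("version", "unknown")   # 2. parse it into sections
        message = form.getfirst("message", "")
        stack = form.getfirst("stacktrace", "")

        conn = MySQLdb.connect(host="localhost", user="bugs", passwd="secret", db="bugs")
        cur = conn.cursor()
        cur.execute(                                    # 3. insert it into the database
            "INSERT INTO reports (version, message, stacktrace) VALUES (%s, %s, %s)",
            (version, message, stack))
        conn.commit()

        msg = MIMEText("%s\n\n%s" % (message, stack))   # 4. mail yourself a copy
        msg["Subject"] = "Bug report (%s)" % version
        msg["From"] = "bugs@example.com"
        msg["To"] = "me@example.com"
        smtplib.SMTP("localhost").sendmail(msg["From"], [msg["To"]], msg.as_string())

        print("Content-Type: text/plain\n")
        print("ok")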


  • mysqldb interfaceError

    - by Johanna
    I have a very weird problem with MySQLdb (the MySQL module for Python). I have a file with queries for inserting records into tables. If I call the functions from that file, it works just fine; but when I try to call one of the functions from another file, it throws me an

        _mysql_exceptions.InterfaceError: (0, '')

    I really don't get what I'm doing wrong here. I call the function from buildDB.py:

        import create

        create.newFormat("HD", 0, 0, 0)

    The function newFormat(..) is in create.py (imported):

        from Database import Database

        db = Database()

        def newFormat(name, width=0, height=0, fps=0):
            format_query = "INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('"+name+"',"+str(width)+","+str(height)+","+str(fps)+");"
            db.execute(format_query)

    And the Database class is the following:

        import MySQLdb
        from MySQLdb.constants import FIELD_TYPE

        class Database():

            def __init__(self):
                server = "localhost"
                login = "seq"
                password = "seqmanager"
                database = "Sequence"
                my_conv = { FIELD_TYPE.LONG: int }
                self.conn = MySQLdb.connection(host=server, user=login, passwd=password, db=database, conv=my_conv)
                # self.cursor = self.conn.cursor()

            def close(self):
                self.conn.close()

            def execute(self, query):
                self.conn.query(query)

    (I put only relevant code.) Traceback:

        Z:\sequenceManager\mysql>python buildDB.py
        D:\ProgramFiles\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWarning: the sets module is deprecated
          from sets import ImmutableSet
        INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('HD',0,0,0);
        Traceback (most recent call last):
          File "buildDB.py", line 182, in <module>
            create.newFormat("HD")
          File "Z:\sequenceManager\mysql\create.py", line 52, in newFormat
            db.execute(format_query)
          File "Z:\sequenceManager\mysql\Database.py", line 19, in execute
            self.conn.query(query)
        _mysql_exceptions.InterfaceError: (0, '')

    The warning has never been a problem before, so I don't think it's related.
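    One thing that stands out (hedged, since only part of the code is shown): the class instantiates lowercase MySQLdb.connection, which is the low-level _mysql connection object, rather than going through the documented DB-API entry point MySQLdb.connect(). A sketch of the conventional shape, which also replaces the string-concatenated INSERT with a parameterized one:

        import MySQLdb

        class Database(object):

            def __init__(self):
                # connect() returns the DB-API wrapper, which manages the
                # low-level connection and result handling for you.
                self.conn = MySQLdb.connect(host="localhost", user="seq",
                                            passwd="seqmanager", db="Sequence")

            def execute(self, query, params=None):
                cur = self.conn.cursor()
                # Parameterized queries avoid hand-built SQL strings and
                # the quoting bugs that come with them.
                cur.execute(query, params)
                self.conn.commit()
                cur.close()

            def close(self):
                self.conn.close()

        db = Database()
        db.execute(
            "INSERT INTO Format (form_name, form_width, form_height, form_fps) "
            "VALUES (%s, %s, %s, %s)", ("HD", 0, 0, 0))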


  • No Buffer Space available(maximum connection reached?) Form Postgres EDB Driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows:

        com.edb.util.PSQLException: The connection attempt failed.
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189)
            at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
            at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161)
            at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
            at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24)
            at com.edb.Driver.makeConnection(Driver.java:391)
            at com.edb.Driver.connect(Driver.java:266)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            ... 12 more
        Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at com.edb.core.PGStream.<init>(PGStream.java:70)
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115)
            ... 20 more

    When the error occurred we were not able to connect to the internet or to the DB, and had to reboot the system. The error occurred again after 3 days at the same place in the code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. the count had not reached the max limit.

    Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get the related data. We are using the following system and applications:

    - Windows 2003 Server SP2
    - Java 1.6
    - Postgres Plus Advanced Server 8.4
    - edb-jdbc14.jar driver for connecting to the DB from Java

    We have used the default configuration of the Postgres DB, except for increasing the connections to 120 from 100. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?


  • Optimize website for touch devices

    - by gregers
    On a touch device like iPhone/iPad/Android it can be difficult to hit a small button with your finger. There is no cross-browser way to detect touch devices with CSS media queries that I know of, so I check if the browser has support for JavaScript touch events. Until now, other browsers haven't supported them, but the latest Google Chrome on the dev channel enabled touch events (even for non-touch devices), and I suspect other browser makers will follow, since laptops with touch screens are coming. This is the test I use:

        function isTouchDevice() {
            try {
                document.createEvent("TouchEvent");
                return true;
            } catch (e) {
                return false;
            }
        }

    The problem is that this only tests whether the browser has support for touch events, not whether the device does. Does anyone know of The Correct[tm] way of giving touch devices a better user experience? Other than sniffing the user agent.

    Mozilla has a media query for touch devices, but I haven't seen anything like it in any other browser: https://developer.mozilla.org/En/CSS/Media_queries#-moz-touch-enabled

    Update: I want to avoid using a separate page/site for mobile/touch devices. The solution has to detect touch devices with object detection or similar from JavaScript, or include a custom touch-CSS without user agent sniffing! The main reason I asked was to make sure it's not possible today, before I contact the CSS3 working group.


  • Deeply nested subqueries for traversing trees in MySQL

    - by nickf
    I have a table in my database where I store a tree structure using the hybrid Nested Set (MPTT) model (the one which has lft and rght values) and the Adjacency List model (storing parent_id on each node):

        my_table (id, parent_id, lft, rght, alias)

    This question doesn't relate to any of the MPTT aspects of the tree, but I thought I'd leave it in in case anyone had a good idea about how to leverage that.

    I want to convert a path of aliases to a specific node. For example, "users.admins.nickf" would find the node with alias "nickf" which is a child of one with alias "admins", which is a child of "users", which is at the root. There is a unique index on (parent_id, alias).

    I started out by writing the function so it would split the path into its parts, then query the database one part at a time:

        SELECT `id` FROM `my_table` WHERE `parent_id` IS NULL AND `alias` = 'users';  -- 1
        SELECT `id` FROM `my_table` WHERE `parent_id` = 1 AND `alias` = 'admins';     -- 8
        SELECT `id` FROM `my_table` WHERE `parent_id` = 8 AND `alias` = 'nickf';      -- 37

    But then I realised I could do it with a single query, using a variable amount of nesting:

        SELECT `id` FROM `my_table`
        WHERE `parent_id` = (
            SELECT `id` FROM `my_table`
            WHERE `parent_id` = (
                SELECT `id` FROM `my_table`
                WHERE `parent_id` IS NULL AND `alias` = 'users'
            ) AND `alias` = 'admins'
        ) AND `alias` = 'nickf';

    Since the number of subqueries depends on the number of steps in the path, am I going to run into issues with having too many subqueries (if there even is such a thing)? Are there any better/smarter ways to perform this query?
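    One flat alternative is a chain of self-joins, one per path segment, which is easy to generate for any depth and lets the unique (parent_id, alias) index drive every hop. A sketch in Python with a hypothetical helper; the table is the one described above, and the %s placeholders assume a MySQLdb-style driver:

        def path_query(path):
            # "users.admins.nickf" -> one SELECT with len(path)-1 self-joins.
            parts = path.split(".")
            joins = "".join(
                " JOIN my_table t%d ON t%d.parent_id = t%d.id" % (i, i, i - 1)
                for i in range(1, len(parts)))
            where = " AND ".join(
                ["t0.parent_id IS NULL"] +
                ["t%d.alias = %%s" % i for i in range(len(parts))])
            sql = "SELECT t%d.id FROM my_table t0%s WHERE %s" % (
                len(parts) - 1, joins, where)
            return sql, parts

        sql, params = path_query("users.admins.nickf")
        # cursor.execute(sql, params) returns the id of the 'nickf' node;
        # each join is resolved by a seek on the unique (parent_id, alias) index.

    Semantically this is the same lookup as the nested version, but the statement stays flat no matter how deep the path gets.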


  • Nhibernate: Stop it from joining to a table that is not needed

    - by Aaron
    I have two tables (tbArea, tbPost) that relate to the following classes. class Area { int ID string Name ... } class Post { int ID string Title Area Area ... } These two classes map up with Fluent Nhibernate. Below is the post mapping. public class PostMapping : ClassMap<Post> { public PostMapping() { Cache.NonStrictReadWrite(); this.Table("tbPost"); Id(x => x.ID) .Column("PostID") .GeneratedBy .Identity(); References(x => x.Area) .ForeignKey("AreaID") .Column("AreaID"); ... } } Any time I perform a query on the Post table "where AreaID = 1(any AreaId)", nhibernate will join to the area table. (What Nhibernate generates for a query) SELECT post fields , area fields (automatically added) FROM tbPost p LEFT JOIN tbArea a on p.areaid = a.areaid where p.areaid = 1 I have tried setting Area to LazyLoad, to Fetch.Select, ReadOnly, and any other setting on the reference and still it will always join to Area. I am trying to optimize the backend database queries, and since I don't need the area object loaded just filtered I would like to eliminate the unnecessary join to Area each time I Query post. What configurations do I need to change or mappings to get area to still be related to post in my objects, but not query it when I filter on AreaID?


  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give it a try against a CodePlex project to which I contribute. That way, I can test functionality against a larger number of changesets than I have locally.

    While it works fine in my environment, heading out over the wire to CodePlex has left me stumped. My application queries the history, but then, when trying to iterate through the history (which is when it lazy-loads the IEnumerable), my application hangs. Looking at IntelliTrace, I see a couple of "first chance" exceptions saying the "item doesn't exist at the specified version", which is patently not true, as I'm trying to get history for "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors being returned to me after forcing debugging to pause. Other operations (like GetItems()) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code:

        IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None,
                                             null, null, null, 5, true, false);

        List<ChangesetItem> returnList = new List<ChangesetItem>();

        foreach (Changeset cs in items) // hangs here on first iteration
        {
            ChangesetItem newItem = new ChangesetItem()
            {
                ChangesetId = cs.ChangesetId,
                //ChangesetNote = cs.CheckinNote.Values[0].Value,
                Comment = cs.Comment,
                Committer = cs.Committer,
                CreationDate = cs.CreationDate
            };
            returnList.Add(newItem);
        }


  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of a SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        WHERE FREETEXT(O.Name, 'query')

    i.e. all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index on a different table, also returns straight away:

        SELECT V.ID, V.Name
        FROM dbo.Venue V
        WHERE FREETEXT(V.Name, 'query')

    i.e. all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(E.Name, 'search')
           OR FREETEXT(V.Name, 'search')

    Here is the execution plan: http://uploadpad.com/files/query.PNG

    From my reading, I didn't think it was even possible to make a full-text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query, then it returns all results within a second, so it's definitely the full-text part that is causing the issue here.

    Can someone explain (i) why this is so slow and (ii) whether this is even supported / whether I am understanding this correctly? Thanks in advance for your help.


  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help?

    I'm aware that I can't count on a query like:

        select * from AuditLog where UserId = 992

    to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two that ever give me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it.

    In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables with inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.


  • Raising events and object persistence in Django

    - by Mridang Agarwalla
    Hi, I have a tricky Django problem which didn't occur to me when I was developing it. My Django application allows a user to sign up and store his login credentials for another site. The application basically allows the user to search this other site (by scraping content off it) and returns the results to the user. For each search, it runs a couple of queries against the other site.

    This seemed to work fine, but sometimes the other site slaps me with a CAPTCHA. I've written the code to get the CAPTCHA image, and I need to return this to the user so he can type it in, but I don't know how. My search request (the query, the username and the password) in my Django application gets passed to a view, which in turn calls the backend that does the scraping/search. When a CAPTCHA is detected, I'd like to raise a client-side event or something along those lines, display the CAPTCHA to the user, and wait for the user's input so that I can resume my search.

    I would somehow need to persist my backend object between calls. I've tried pickling it, but that doesn't work because I get the "Can't pickle 'lock' object" error. I don't know how to implement this. Any help/ideas? Thanks a ton.
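    One way around the unpicklable backend is to not persist the object at all: store only plain state (the pending query plus the scrape session's cookies) in the Django session, return a CAPTCHA page, and rebuild the backend when the answer comes back. A minimal sketch, with run_search as a hypothetical stand-in for the scraping backend and all session keys and templates illustrative:

        # views.py -- a sketch, not the actual application code
        from django.shortcuts import render

        def search(request):
            creds = request.session["site_credentials"]   # stored at signup
            result = run_search(request.GET["q"], creds)  # hypothetical backend call

            if result.captcha_required:
                # Persist only plain, picklable state: the pending query and
                # the scrape session's cookies, not the scraper object itself
                # (which is what holds the unpicklable lock).
                request.session["pending_query"] = request.GET["q"]
                request.session["scrape_cookies"] = result.cookies
                return render(request, "captcha.html",
                              {"image_url": result.captcha_url})

            return render(request, "results.html", {"results": result.items})

        def captcha_answer(request):
            # Rebuild the backend from the saved state and resume the search.
            creds = request.session["site_credentials"]
            result = run_search(request.session.pop("pending_query"), creds,
                                cookies=request.session.pop("scrape_cookies"),
                                captcha_text=request.POST["captcha"])
            return render(request, "results.html", {"results": result.items})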

