Search Results

Search found 19966 results on 799 pages for 'datetime query'.

Page 336/799 | < Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that B-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, best performance is obtained. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree rebalances to be performed. I know very few databases use binary trees, but rather use n-order balanced trees, and I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple rebalances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO

        CREATE TABLE [dbo].[T1](
            [ID]   [int] NOT NULL,
            [Data] [char](300) NULL,   -- filler column referenced by the inserts below
            CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO

        CREATE TABLE [dbo].[T2](
            [ID]   [uniqueidentifier] NOT NULL,
            [Data] [char](300) NULL,   -- filler column referenced by the inserts below
            CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO

        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)
        set @c = REPLICATE('x', 300)   -- give the filler column a full 300-byte value

        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end
        set @t2 = GETDATE()

        WAITFOR delay '0:0:10'

        set @t3 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end

        select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID]

        drop table T1
        drop table T2

    Note that I am not subtracting any time for the creation of the GUID nor for the considerably larger row size. The results on my machine were as follows:

        Int:  17,340 ms
        GUID:  6,746 ms

    This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? P.S. I get that this isn't a question; it's an invite to discussion, and that is relevant to learning optimum programming.
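
    One way to dig a little deeper into results like these (a hedged sketch, assuming SQL Server 2005 or later, run after the inserts but before the DROP TABLE statements) is to compare the page counts and fragmentation of the two clustered indexes:

        SELECT OBJECT_NAME(ips.object_id)        AS table_name,
               ips.index_type_desc,
               ips.page_count,
               ips.avg_fragmentation_in_percent,
               ips.avg_page_space_used_percent   -- low values suggest page splits left half-empty pages
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.T2'), NULL, NULL, 'DETAILED') AS ips
        UNION ALL
        SELECT OBJECT_NAME(ips.object_id),
               ips.index_type_desc,
               ips.page_count,
               ips.avg_fragmentation_in_percent,
               ips.avg_page_space_used_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.T1'), NULL, NULL, 'DETAILED') AS ips;

    Random GUID inserts typically leave far more pages and heavier fragmentation behind than sequential integer inserts, which is a cost separate from the raw insert timings measured above.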

    Read the article

  • C# Byte[] to Url Friendly String

    - by LorenVS
    Hello, I'm working on a quick captcha generator for a simple site I'm putting together, and I'm hoping to pass an encrypted key in the URL of the page. I could probably do this as a query string parameter easily enough, but I'm hoping not to (just because nothing else runs off the query string). My encryption code produces a byte[], which is then transformed using Convert.ToBase64String(byte[]) into a string. This string, however, is still not quite URL friendly, as it can contain things like '/' and '='. Does anyone know of a better function in the .NET framework to convert a byte array to a URL-friendly string? I know all about System.Web.HttpUtility.UrlEncode() and its equivalents; however, they only work properly with query string parameters. If I URL-encode an '=' inside of the path, my web server brings back a 400 Bad Request error. Anyway, not a critical issue, but I'm hoping someone can give me a nice solution.

    EDIT: Just to be absolutely sure exactly what I'm doing with the string, I figured I would supply a little more information. The byte[] that results from my encryption algorithm should be fed through some sort of algorithm to make it into a URL-friendly string. After this, it becomes the content of an XElement, which is then used as the source document for an XSLT transformation, and is used as part of the href attribute for an anchor. I don't believe the XSLT transformation is causing the issues, since what is coming through on the path appears to be an encoded query string parameter, but it still causes the HTTP 400. I've also tried HttpUtility.UrlPathEncode() on a Base64 string, but that doesn't seem to do the trick either (I still end up with '/'s in my URL).

    Read the article

  • mysql alter to table

    - by user485783
    Hi, I run the MySQL ALTER statements below against the database via phpMyAdmin one by one and they work fine. Could anyone help me run them all together at once? Or do you know of sample PHP code that could execute them? Please let me know. Thanks in advance.

        ALTER TABLE user ADD title varchar(16) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER user_id
        ALTER TABLE customer ADD title varchar(16) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER customer_id
        ALTER TABLE customer ADD date_birtdate datetime NOT NULL DEFAULT '0000-00-00 00:00:00' AFTER lastname
        ALTER TABLE customer ADD security_question varchar(96) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER fax
        ALTER TABLE customer ADD security_answer varchar(96) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER fax
        ALTER TABLE customer ADD pin_number text COLLATE utf8_bin AFTER password
        ALTER TABLE customer ADD notes text COLLATE utf8_bin AFTER bank_number
        ALTER TABLE customer ADD last_active datetime NOT NULL DEFAULT '0000-00-00 00:00:00' AFTER date_added
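
    One way to run them as a single batch (a sketch, not tested against this schema): MySQL allows several ADD clauses in one ALTER TABLE, so the customer changes can be collapsed into one statement, and the semicolon-terminated batch can then be pasted into phpMyAdmin's SQL tab in one go:

        ALTER TABLE user
            ADD title varchar(16) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER user_id;

        ALTER TABLE customer
            ADD title varchar(16) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER customer_id,
            ADD date_birtdate datetime NOT NULL DEFAULT '0000-00-00 00:00:00' AFTER lastname,
            ADD security_question varchar(96) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER fax,
            ADD security_answer varchar(96) COLLATE utf8_bin NOT NULL DEFAULT '' AFTER fax,
            ADD pin_number text COLLATE utf8_bin AFTER password,
            ADD notes text COLLATE utf8_bin AFTER bank_number,
            ADD last_active datetime NOT NULL DEFAULT '0000-00-00 00:00:00' AFTER date_added;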

    Read the article

  • How to format a date when I load data from google-app-engine

    - by zjm1126
    I use remote_api to load data from Google App Engine:

        appcfg.py download_data --config_file=helloworld/GreetingLoad.py --filename=a.csv --kind=Greeting helloworld

    The setting is:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                                             [('author', str, None),
                                              ('content', str, None),
                                              ('date', str, None),
                                             ])

        exporters = [AlbumExporter]

    In the downloaded a.csv the date is not readable (screenshot not included here), while the date in the appspot.com admin console shows the full value (screenshot not included). So how do I get the full date? Thanks.

    I changed it to this:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                                             [('author', str, None),
                                              ('content', str, None),
                                              ('date', lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date(), None),
                                             ])

        exporters = [AlbumExporter]

    but I get an error (the error message is not included).

    Read the article

  • SQL Selects on subsets

    - by Adam
    I need to check if a row exists in a database; however, I am trying to find the way to do this that offers the best performance. This is best summarised with an example. Let's assume I have the following table:

        dbo.Person(
            FirstName varchar(50),
            LastName varchar(50),
            Company varchar(50)
        )

    Assume this table has millions of rows, however ONLY the column Company has an index. I want to find out if a particular combination of FirstName, LastName and Company exists. I know I can do this:

        IF EXISTS(select 1 from dbo.Person
                  where FirstName = @FirstName
                    and LastName = @LastName
                    and Company = @Company)
        Begin
            ....
        End

    However, unless I'm mistaken, that will do a full table scan. What I'd really like it to do is a query where it utilises the index. With the table above, I know that the following query will have great performance, since it uses the index:

        Select * from dbo.Person where Company = @Company

    Is there any way to make the search only on that subset of data? e.g. something like this:

        select * from (
            Select * from dbo.Person where Company = @Company
        ) where FirstName = @FirstName and LastName = @LastName

    That way, it would only be doing a table scan on a much narrower collection of data. I know the query above won't work, but is there a query that would? Oh, and I am unable to create temporary tables, as the user will only have read access.
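
    For what it's worth, a derived table just needs an alias to be valid T-SQL, so a sketch of that last form would be the following (same @-parameters as above; note the optimizer will typically flatten this back into the same plan as a single WHERE clause, i.e. a seek on the Company index plus a residual filter rather than a full table scan):

        SELECT p.*
        FROM (SELECT * FROM dbo.Person WHERE Company = @Company) AS p
        WHERE p.FirstName = @FirstName
          AND p.LastName  = @LastName;

    If an index could ever be added (not possible with read-only access, so this part is only a sketch), a composite index would let the whole predicate be resolved with a single seek:

        CREATE NONCLUSTERED INDEX IX_Person_Company_Name
            ON dbo.Person (Company, LastName, FirstName);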

    Read the article

  • Lucene (.NET) Document structure and performance suggestions.

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deeply into NumericField, but I'm not thinking it's the right choice here. My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199))). Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki and at Lucid Imagination... I'm a bit further down the rabbit hole than that.

    Read the article

  • Problem with migrating a model in ruby

    - by Shreyas Satish
    I run script/generate model query, edit query.rb in models:

        class Query < ActiveRecord::Base   # I even tried Migrations instead of Base
          def sef.up
            create table :queries do |t|
              t.string :name
            end
          end

          def self.down
            drop_table :queries
          end
        end

    then run rake db:migrate, and what I see in the db is this:

        mysql> desc queries;
        +------------+----------+------+-----+---------+----------------+
        | Field      | Type     | Null | Key | Default | Extra          |
        +------------+----------+------+-----+---------+----------------+
        | id         | int(11)  | NO   | PRI | NULL    | auto_increment |
        | created_at | datetime | YES  |     | NULL    |                |
        | updated_at | datetime | YES  |     | NULL    |                |
        +------------+----------+------+-----+---------+----------------+

    Where is the "name" field? HELP! Cheers!

    Read the article

  • sqlite issues on iphone

    - by prajakta
    I can execute the query, but I only see the last value of list_ID. I want to access 4, but I am getting 6.

        list_ID is ----------------------->is 4
        2010-12-24 12:57:07.507 DatabaseTest[3398:207] QUERY EXECUTION
        2010-12-24 12:57:07.508 DatabaseTest[3398:207] bxbxbxmnb
        2010-12-24 12:57:07.508 DatabaseTest[3398:207] list_ID is ----------------------->is 5
        2010-12-24 12:57:07.509 DatabaseTest[3398:207] QUERY EXECUTION
        2010-12-24 12:57:07.510 DatabaseTest[3398:207] bxbxbxmnb
        2010-12-24 12:57:07.511 DatabaseTest[3398:207] list_ID is ----------------------->is 6

    This is the code:

        if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
            while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                acat = [[[Cat_tableList alloc] init] autorelease];
                ///////////// statement ////////
                NSLog(@" QUERY EXECUTION");
                // sStudent.cat_id = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)];
                acat.list_id = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)];
                acat.cat_id  = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];
                acat.names   = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)];
                acat.content = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 3)];
                [appDelegate.catLists addObject:acat];
                NSLog(@"final AUTo_ID array is ............%@", acat.content);
                NSLog(@" list_ID is ----------------------->is %@", acat.list_id);
                // I just want the value to be 4 so that I can use it somewhere
            }
        }

    Read the article

  • MySQL INJECTION Solution...

    - by Val
    I have been bothered for so long by MySQL injection and was thinking of a way to eliminate this problem altogether. I have come up with something below; I hope that many people will find it useful. The only drawback I can think of is the partial search: "Jo" returns "John" by using the LIKE %% statement. Here is a PHP solution:

        <?php
        function safeQ() {
            $search  = array('delete', 'select');   // and every keyword...
            $replace = array(base64_encode('delete'), base64_encode('select'));
            foreach ($_REQUEST as $k => $v) {
                $_REQUEST[$k] = str_replace($search, $replace, $v);
            }
        }
        safeQ();

        function html($str) {
            $search  = array(base64_encode('delete'), base64_encode('select'));
            $replace = array('delete', 'select');   // and every keyword...
            return str_replace($search, $replace, $str);
        }

        //example 1
        ...
        $result = mysql_fetch_array($query);
        echo html($result[0]['field_name']);

        //example 2
        $select = 'SELECT * FROM safeQ($_GET["query"]) ';

        //example 3
        $insert = 'INSERT INTO .... value(safeQ($_GET["query"]))';
        ?>

    I know, I know, you could still inject using 1=1 or another type of injection, but I think this could solve half of the problem so that the right MySQL query is executed. So my question is whether anyone can find any drawbacks to this; if so, please feel free to comment here. PLEASE GIVE AN ANSWER only if you think that this is a very useful solution and no major drawbacks are found, OR you think it is a bad idea altogether...

    Read the article

  • How To Get A Field Value Based On The Max Of Another Field In VFP v8.0

    - by DaveB
    So, I have a table and I want to get the value from one field in the record with the greatest DateTime() value in another field and where still another field is equal to a certain value. Example data:

        Balance     Created                  MeterNumber
        7924.252    02/02/2010 10:31:48 AM   2743800
        7924.243    02/02/2010 11:01:37 AM   2743876
        7924.227    02/02/2010 03:55:50 PM   2743876

    I want to get the balance for the record with the greatest created datetime for a specific meter number. In VFP 7 I can use:

        SELECT a.balance, MAX(a.created)
        FROM MyTable a
        WHERE a.meternumber = '2743876'

    But in the VFP v8.0 OLE DB driver I am using in my ASP.NET page I must conform to VFP 8, which says you must have a GROUP BY listing each non-aggregate field in the SELECT. This would return a record for each balance if I added GROUP BY a.balance to my query. Yes, I could issue a SET ENGINEBEHAVIOR 70, but I wanted to know whether this could be done without having to revert to a previous version.
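
    One pattern that stays within the VFP 8 GROUP BY rules (a sketch, assuming the VFP 8 SELECT-SQL engine accepts a subquery here) is to filter on the maximum Created value instead of grouping:

        SELECT a.balance, a.created
        FROM MyTable a
        WHERE a.meternumber = '2743876'
          AND a.created = (SELECT MAX(b.created)
                           FROM MyTable b
                           WHERE b.meternumber = '2743876')

    If two rows happened to share the same Created value this would return both, so the caller may still need to pick one.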

    Read the article

  • Show last 4 table entries mysql php

    - by user272899
    I have a movie database, kind of like a blog, and I want to display the last 4 created entries. I have a timestamp column in my table called 'dateadded'. Using this code, how would I display only the 4 most recent entries in the table?

        <?php
        //connect to database
        mysql_connect($mysql_hostname, $mysql_user, $mysql_password);
        @mysql_select_db($mysql_database) or die("<b>Unable to connect to specified database</b>");

        //query database
        $query = "SELECT * FROM movielist";
        $result = mysql_query($query) or die('Error, insert query failed');

        $row = 0;
        $numrows = mysql_num_rows($result);
        while ($row < $numrows) {
            $id      = mysql_result($result, $row, "id");
            $imgurl  = mysql_result($result, $row, "imgurl");
            $imdburl = mysql_result($result, $row, "imdburl");
        ?>
        <div class="moviebox rounded"><a href="http://<?php echo $domain; ?>/viewmovie?movieid=<?php echo $id; ?>" rel="facebox">
            <img src="<?php echo $imgurl; ?>" />
            <form method="get" action="">
                <input type="text" name="link" class="link" style="display:none" value="http://us.imdb.com/Title?<?php echo $imdburl; ?>"/>
            </form>
        </a></div>
        <?php
            $row++;
        }
        ?>
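
    A sketch of the usual approach (assuming dateadded is the timestamp column described above): let MySQL do the sorting and limiting, and leave the display loop unchanged. The $query string would become:

        SELECT * FROM movielist
        ORDER BY dateadded DESC
        LIMIT 4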

    Read the article

  • Authlogic auto login fails on registration with STI User model

    - by Wei Gan
    Authlogic by default is supposed to auto login when the user's persistence token changes. It seems to fail in my Rails app. I set up the following single table inheritance user model hierarchy:

        class BaseUser < ActiveRecord::Base
        end

        class User < BaseUser
          acts_as_authentic
        end

        create_table "base_users", :force => true do |t|
          t.string   "email"
          t.string   "crypted_password"
          t.string   "persistence_token"
          t.string   "first_name"
          t.string   "last_name"
          t.datetime "created_at"
          t.datetime "updated_at"
          t.string   "type"
        end

    To get auto login to work, I need to explicitly log users in in my UsersController:

        def create
          @user = User.new(params[:user])
          if @user.save
            UserSession.create(@user) # EXPLICITLY LOG USER IN BY CREATING SESSION
            flash[:notice] = "Welcome to Askapade!"
            redirect_to_target_or_default root_url
          else
            render :action => :new
          end
        end

    I was wondering if it's anything to do with STI, or with the table being named "base_users" and not "users". I set it up before without STI and it worked, so I'm wondering why it fails once I put this hierarchy in place. Thanks!

    Read the article

  • Would this method work to scale out SQL queries?

    - by David
    I have a database containing a single huge table. At the moment a query can take anything from 10 to 20 minutes and I need that to go down to 10 seconds. I have spent months trying different products like GridSQL. GridSQL works fine, but is using its own parser which does not have all the needed features. I have also optimized my database in various ways without getting the speedup I need.

    I have a theory on how one could scale out queries, meaning that I utilize several nodes to run a single query in parallel. The idea is to take an incoming SQL query and simply run it exactly as it is on all the nodes. When the results are returned to a coordinator node, the same query is run on the union of the resultsets. I realize that an aggregate function like average needs to be rewritten into a count and a sum on the nodes, and that the coordinator divides the sum of the sums by the sum of the counts to get the average.

    What kinds of problems could not easily be solved using this model? I believe one issue would be the count distinct function.

    Edit: I am getting so many nice suggestions, but none have addressed the method.
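
    As a concrete illustration of the average rewrite described above (a sketch with hypothetical table and column names):

        -- Sent to every node in place of: SELECT AVG(amount) FROM orders WHERE region = 'EU'
        SELECT SUM(amount)   AS amount_sum,
               COUNT(amount) AS amount_count
        FROM orders
        WHERE region = 'EU';

        -- On the coordinator, over the union of the per-node rows:
        SELECT SUM(amount_sum) / SUM(amount_count) AS avg_amount
        FROM node_results;

    COUNT(DISTINCT x) does not decompose the same way: the same value can occur on several nodes, so the coordinator needs the distinct values themselves (or some merged structure) rather than per-node counts.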

    Read the article

  • How to get resultset with stored procedure calls over two linked servers?

    - by räph
    I have problems filling a temporary table with the resultset from a procedure call on a linked server, in which again a procedure on another server is called. I have a stored procedure sproc1 with the following code, which calls another procedure sproc2 on a linked server:

        SET @sqlCommand = 'INSERT INTO #tblTemp ( ModuleID, ParamID) ' +
                          '( SELECT * FROM OPENQUERY(' + @targetServer + ', ' +
                          '''SET FMTONLY OFF; EXEC ' + @targetDB + '.usr.sproc2 ' + @param + ''' ) )'
        exec ( @sqlCommand )

    Now in the called sproc2 I again call a third procedure sproc3 on another linked server, which returns my resultset:

        SET @sqlCommand = 'EXEC ' + @targetServer + '.database.usr.sproc3 ' + @param
        exec ( @sqlCommand )

    The whole thing doesn't work, as I get SQL error 7391: The operation could not be performed because OLE DB provider "%ls" for linked server "%ls" was unable to begin a distributed transaction.

    I already checked the hints in this Microsoft article, but without success. But maybe I can change the code in sproc1. Would there be some alternative to the temp table and the OPENQUERY? Just calling stored procedures from server A to B to C and returning the resultset works (I do this often in the application). But this special case with the temp table and OPENQUERY doesn't work! Or is it just not possible, what I am trying to do? The Microsoft article states:

        Check the object you refer on the destination server. If it is a view or a stored procedure, or causes an execution of a trigger, check whether it implicitly references another server. If so, the third server is the source of the problem. Run the query directly on the third server. If you cannot run the query directly on the third server, the problem is not actually with the linked server query. Resolve the underlying problem first.

    Is this my case? PS: I can't avoid the architecture with the three servers.

    Read the article

  • Choose a XML node in SQL Server based on max value of a child element

    - by Jay
    I am trying to select from the SQL Server 2005 XML datatype some values based on the max date that is located in a child node. I have multiple rows with XML similar to the following stored in a field in SQL Server:

        <user>
            <name>Joe</name>
            <token>
                <id>ABC123</id>
                <endDate>2013-06-16 18:48:50.111</endDate>
            </token>
            <token>
                <id>XYX456</id>
                <endDate>2014-01-01 18:48:50.111</endDate>
            </token>
        </user>

    I want to perform a select from this XML column where it determines the max date within the token element, and it would return, for each record, a data row similar to this:

        Joe    XYZ456    2014-01-01 18:48:50.111

    I have tried to find a max function for XPath that would allow me to select the correct token element, but I couldn't find one that would work. I also tried to use the SQL MAX function, but I wasn't able to get it working with that method either. If I only have a single token it of course works fine, but when I have more than one I get a NULL, most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element, but I haven't found a way to do that. Here is an example of the one that works when I only have a single token:

        SELECT XMLCOL.query('user/name').value('.', 'NVARCHAR(20)')                    as name,
               XMLCOL.query('user/token/id').value('.', 'NVARCHAR(20)')                as id,
               XMLCOL.query('user/token/endDate').value('xs:datetime(.)', 'DATETIME')  as endDate
        FROM MYTABLE
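
    A sketch of one way to pick the newest token per row (assuming SQL Server 2005, a hypothetical key column PersonID on MYTABLE to tie tokens back to their row, and that the endDate text always matches the 'yyyy-mm-dd hh:mi:ss.mmm' form so CONVERT style 121 applies):

        SELECT name, id, endDate
        FROM (
            SELECT
                m.PersonID,                                          -- hypothetical key column
                m.XMLCOL.value('(user/name)[1]', 'NVARCHAR(20)') AS name,
                t.c.value('(id)[1]', 'NVARCHAR(20)')             AS id,
                CONVERT(datetime, t.c.value('(endDate)[1]', 'NVARCHAR(30)'), 121) AS endDate,
                ROW_NUMBER() OVER (
                    PARTITION BY m.PersonID
                    ORDER BY CONVERT(datetime, t.c.value('(endDate)[1]', 'NVARCHAR(30)'), 121) DESC
                ) AS rn
            FROM MYTABLE AS m
            CROSS APPLY m.XMLCOL.nodes('/user/token') AS t(c)
        ) AS x
        WHERE rn = 1;

    CROSS APPLY ... nodes() turns each <token> into its own row, and ROW_NUMBER then keeps only the latest one per source row.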

    Read the article

  • Question About DateCreated and DateModified Columns - MS SQL Server

    - by user311509
    CREATE TABLE Customer (
        customerID int identity (500,20) CONSTRAINT
        . .
        dateCreated datetime DEFAULT GetDate() NOT NULL,
        dateModified datetime DEFAULT GetDate() NOT NULL
    );

    When I insert a record, dateCreated and dateModified get set to the default date/time. When I update/modify the record, dateModified and dateCreated remain as they are. What should I do? Obviously, I need the dateCreated value to remain as it was inserted the first time, while dateModified keeps changing whenever a change/modification occurs in the record's fields. In other words, can you please write a sample quick trigger? I don't know much yet... Any help will be appreciated.
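
    A minimal sketch of an AFTER UPDATE trigger that stamps dateModified and leaves dateCreated alone (assuming customerID is the primary key, as the identity column above suggests; the column DEFAULTs already cover the INSERT case):

        CREATE TRIGGER trg_Customer_DateModified
        ON Customer
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE c
            SET dateModified = GETDATE(),
                dateCreated  = d.dateCreated   -- restore the original value in case the update overwrote it
            FROM Customer AS c
            JOIN inserted  AS i ON i.customerID = c.customerID
            JOIN deleted   AS d ON d.customerID = i.customerID;
        END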

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. I'm not a stupid PC user asking you to fix my PC problem. I am intrigued and am having a deep technical look at what's going on.

    I have come across a Windows XP machine that is sending unwanted P2P traffic. I have done a 'netstat -b' command and explorer.exe is sending out the traffic. When I kill this process the traffic stops and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

        GNUTELLA/0.6 200 OK

    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++ and the only related ones I can see are socket connections etc.

    My question is: what can I do to look into this deeper? What does malware achieve by sending P2P traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.

    Read the article

  • ngModel and component with isolated scope

    - by Artem Andreev
    I am creating a simple ui-datetime directive. It splits a JavaScript Date object into _date, _hours and _minutes parts. _date uses the jQuery UI datepicker; _hours and _minutes are number inputs. See example: http://jsfiddle.net/andreev_artem/nWsZp/3/ On GitHub: https://github.com/andreev-artem/angular_experiments/tree/master/ui-datetime

    As far as I understand, best practice when you create a new component is to use an isolated scope. When I tried to use an isolated scope, nothing works: ngModel.$viewValue === undefined. When I tried to use a new scope (my example, not so good a variant IMHO), ngModel uses the value on the newly created scope. Of course I can create a directive with an isolated scope and work with the ngModel value through "=expression" (example). But I think that working with ngModelController is a better practice.

    My questions: Can I use ngModelController with an isolated scope? If it is not possible, which solution is better for creating such a component?

    Read the article

  • Sorting by custom field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form, using PHP and MySQL. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now that it would be better to move information about an element (name, type, size) into another table. The function to scan a specified directory and fill the table works correctly. Noteworthy, I am adding elements to the tree in a specific order: folders first and then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate the correct HTML code (<ul><li>... and so on) to display the tree.

    Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

        SELECT * FROM element
        WHERE parent_id = {$parentId}
        ORDER BY element_type (so folders would be first), size (or name for example) asc/desc

    After that, for each result which has type = 'folder', I will send another query to get its contents. It's also possible to fetch the whole tree by left_key and sort it afterwards in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?
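
    A concrete version of that per-level query (a sketch; it relies on the enum being declared as ('folder','file'), so sorting the enum ascending puts folders before files):

        SELECT *
        FROM element
        WHERE parent_id = :parentId          -- the folder currently being expanded
        ORDER BY element_type ASC,           -- 'folder' = enum position 1, 'file' = position 2
                 element_size DESC;          -- or element_name, depending on the requested sort

    Each row that comes back with element_type = 'folder' is then expanded with the same query, using its element_id as the next :parentId.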

    Read the article

  • Why does Hibernate 2nd level cache only cache within a session?

    - by Synesso
    Using a named query in our application and with ehcache as the provider, it seems that the query results are tied to the session. Any attempt to access the value from the cache for a second time results in a LazyInitializationException. We have set lazy=true for the following mapping because this object is also used by another part of the system which does not require the reference... and we want to keep it lean.

        <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false">
            <cache usage="read-only"/>
            <id name="code" type="long" column="ad_point_id">
                <generator class="assigned" />
            </id>
            <property name="name" column="ad_point_description" type="string"/>
            <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true">
                <cache usage="read-only"/>
                <key column="ad_point_id" />
                <element type="string" column="synonym_description" />
            </set>
        </class>

        <query name="find.adpoints.by.heading">from ReferenceAdPoint adpoint left outer join fetch adpoint.synonyms where adpoint.adPointField.headingCode = ?</query>

    Here's a snippet from our hibernate.cfg.xml:

        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
        <property name="hibernate.cache.use_query_cache">true</property>

    It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?

    Read the article

  • PHP Database connection practice

    - by Phill Pafford
    I have a script that connects to multiple databases (Oracle, MySQL and MSSQL). Each database connection might not be used each time the script runs, but all could be used in a single script execution. My question is: is it better to connect to all the databases once at the beginning of the script, even though all the connections might not be used, or is it better to connect to them as needed? The only catch is that I would need to have the connection call in a loop (so the database connection would be made new for X amount of times in the loop).

    Example Code #1:

        // Connections at the beginning of the script
        $dbh_oracle = connect2db();
        $dbh_mysql  = connect2db();
        $dbh_mssql  = connect2db();

        for ($i=1; $i<=5; $i++) {
            // NOTE: might not use all the connections
            $rs = queryDb($query, $dbh_*); // $dbh can be any of the 3 connections
        }

    Example Code #2:

        // Connections in the loop
        for ($i=1; $i<=5; $i++) {
            // NOTE: Would use all the connections but connecting multiple times
            $dbh_oracle = connect2db();
            $dbh_mysql  = connect2db();
            $dbh_mssql  = connect2db();

            $rs_oracle = queryDb($query, $dbh_oracle);
            $rs_mysql  = queryDb($query, $dbh_mysql);
            $rs_mssql  = queryDb($query, $dbh_mssql);
        }

    Now I know you could use a persistent connection, but would that be one connection open for each database in the loop as well? Like mysql_pconnect(), mssql_pconnect() and the adodb persistent connection method for Oracle. I know that persistent connections can also be resource hogs, and I'm looking for the best performance/practice.

    Read the article

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than on one that does not, because reorganizing the data in a clustered index involves changing the physical layout of the data on the disk. I'm not sure how to phrase my question except through an example I came across at work.

    Assume there is a table (Junk) and there are two queries that are done on the table; the first query searches by Name and the second query searches by Name and Something. As I was working on the database I discovered that the table had been created with two indexes, one to support each query, like so:

        --drop table Junk1
        CREATE TABLE Junk1
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name ON Junk1
        (
            Name
        )

        CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1
        (
            Name, Something
        )

    Now when I looked at the two indexes, it seems that IX_Name is redundant, since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead:

        --drop table Junk2
        CREATE TABLE Junk2
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name_Something ON Junk2
        (
            Name, Something
        )

    Someone suggested that the first indexing scheme should be kept, since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates for Name and Something). Would that make sense? I think the second indexing method would be better, since it means one less index needs to be maintained. I would appreciate any insight into this specific example, or pointers to more info on maintenance of clustered indexes.
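
    One way to put numbers on the "more efficient inserts/deletes" claim (a sketch, assuming SQL Server 2005 or later and that a representative insert/delete workload has been run against both variants) is to compare the operational statistics of the indexes:

        SELECT OBJECT_NAME(ios.object_id)  AS table_name,
               i.name                      AS index_name,
               ios.leaf_insert_count,
               ios.leaf_delete_count,
               ios.leaf_allocation_count   -- leaf page allocations roughly track page splits
        FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
        JOIN sys.indexes AS i
          ON i.object_id = ios.object_id
         AND i.index_id  = ios.index_id
        WHERE ios.object_id IN (OBJECT_ID('Junk1'), OBJECT_ID('Junk2'));

    The Junk1 scheme pays for maintaining two indexes on every write, while the Junk2 scheme pays for a wider clustered key; the counters above make that trade-off visible for the actual workload.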

    Read the article

  • Possible to rank partial matches in Postgres full text search?

    - by Joe
    I'm trying to calculate a ts_rank for a full-text match where some of the terms in the query may not be in the ts_vector against which it is being matched. I would like the rank to be higher in a match where more words match. Seems pretty simple? Because not all of the terms have to match, I have to | the operands, to give a query such as to_tsquery('one|two|three') (if it was &, all would have to match). The problem is, the rank value seems to be the same no matter how many words match. In other words, it's maxing rather than multiplying the clauses.

        select ts_rank('one two three'::tsvector, to_tsquery('one'));

    gives 0.0607927.

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three|four'));

    gives the expected lower value of 0.0455945, because 'four' is not in the vector. But

        select ts_rank('one two three'::tsvector, to_tsquery('one|two'));

    gives 0.0607927, and likewise

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three'));

    gives 0.0607927.

    I would like the result of ts_rank to be higher if more terms match. Possible? To counter one possible response: I cannot calculate all possible subsequences of the search query as intersections and then union them all in a query, because I am going to be working with large queries. I'm sure there are plenty of arguments against this anyway!

    Edit: I'm aware of ts_rank_cd but it does not solve the above problem.

    Read the article

  • NHibernate slow mapping

    - by Rob A
    My question is: what can I do to determine the cause of the slowness, or what can I do to speed it up without knowing the exact cause? I am running a simple query and it appears that the mapping back to the entities is taking forever. The result set is 350 rows, which is not much data in my opinion.

        IRepository repo = ObjectFactory.GetInstance<IRepository>();
        var q = repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40));
        foreach (var order in q)
        {
            Console.WriteLine(order.TransactionNumber);
        }

    The profiler is telling me it is executing the query in 7 ms / 35,257 ms; I am assuming that the former is the actual response from the db and the latter is the time it takes NH to do its magic. 35 seconds is too long. This is a simple mapping, one table, nested components, using the fluent interface to do the mappings. I just start up a simple console app and run the one query. The slowness is measured after the SessionFactory is initialized; there should only be one session, and I am not using a transaction. Thanks

    Read the article
