Search Results

Search found 36756 results on 1471 pages for 'mysql real query'.

Page 620/1471 | < Previous Page | 616 617 618 619 620 621 622 623 624 625 626 627  | Next Page >

  • How do I get values of Linq Expression

    - by Yucel
    Hi, I have a method that takes an Expression type parameter. Inside the method I want to get the values of this expression, but I can't work out how to do that. private User GetUser(Expression<Func<User, bool>> query) { User user = Context.User.Where(query).FirstOrDefault(); return user; } I am calling this method with different parameters, like GetUser(u => u.Username == username); and GetUser(u => u.Email == email);. I want to change the GetUser method to work with stored procedures, but I need to find out what is inside the query parameter: if the query is u.Username == username I will call the GetUserByUsername SP; if it is u.Email == email I will call the GetUserByEmail SP.
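
    One way to sketch this is to inspect the expression tree; the two Sp helpers below are hypothetical wrappers around the stored procedures:

        private User GetUser(Expression<Func<User, bool>> query)
        {
            // Expect a body of the shape "u.SomeMember == someValue".
            var binary = query.Body as BinaryExpression;
            if (binary != null && binary.NodeType == ExpressionType.Equal)
            {
                var member = binary.Left as MemberExpression;
                if (member != null)
                {
                    // Compile the right-hand side to obtain the compared value.
                    object value =
                        Expression.Lambda(binary.Right).Compile().DynamicInvoke();
                    if (member.Member.Name == "Username")
                        return GetUserByUsernameSp((string)value);   // hypothetical
                    if (member.Member.Name == "Email")
                        return GetUserByEmailSp((string)value);      // hypothetical
                }
            }
            // Anything else falls back to the original LINQ path.
            return Context.User.Where(query).FirstOrDefault();
        }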

    Read the article

  • How to issue SQL Server lock hints in WCF Entity Framework?

    - by TMN
    I'm just learning the WCF Entity Framework stack (and .NET in general), and I'm running into a problem specifying lock hints in an embedded SQL query. I'm trying to specify a query that has lock hints in it (e.g., "SELECT * FROM xyz WITH (XLOCK, ROWLOCK)") and I keep getting errors from the runtime that the query syntax is not valid. The query works if I enter it directly into SQL Server. Is there some special syntax for lock hints when creating IQueryable result sets, or do I have to somehow mess with the transaction isolation level (probably not possible, as I need to specify different lock hints on a subquery within the main query)?
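
    One workaround sketch, assuming an EF4 ObjectContext (the names context and Xyz are stand-ins): skip LINQ-to-Entities for the locking query and pass the hinted T-SQL through directly, inside a transaction so the locks are actually held:

        using (var tx = new TransactionScope())   // System.Transactions
        {
            // ExecuteStoreQuery passes the SQL to the provider verbatim,
            // so the WITH (XLOCK, ROWLOCK) hints survive untouched.
            List<Xyz> rows = context.ExecuteStoreQuery<Xyz>(
                "SELECT * FROM xyz WITH (XLOCK, ROWLOCK)").ToList();
            // ... work with rows while the locks are held ...
            tx.Complete();
        }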

    Read the article

  • How do I make a Java ResultSet available in my jsp?

    - by melling
    I'd like to swap out an sql:query for some Java code that builds a complex query with several parameters. The current SQL is a simple select: <sql:query var="result" dataSource="${dSource}" sql="select * from TABLE" /> How do I take my Java ResultSet (i.e., rs = stmt.executeQuery(sql);) and make the results available in my JSP so I can do this textbook JSP? To be more clear, I want to remove the above query and replace it with Java: <% ResultSet rs = stmt.executeQuery(sql); // Messy code will be in some Controller %> <c:forEach var="row" items="${result.rows}"> <c:out value="${row.name}"/> </c:forEach> Do I set the session/page variable in the Java section, or is there some EL trick that I can use to access the variable?
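
    One textbook approach, sketched below: flatten the ResultSet into plain collections in the controller, stash them in a request attribute, and forward to the view (names like "rows" and /view.jsp are placeholders):

        // Controller/servlet side: copy the ResultSet into a List of Maps.
        List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(sql);
        ResultSetMetaData md = rs.getMetaData();
        while (rs.next()) {
            Map<String, Object> row = new HashMap<String, Object>();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                row.put(md.getColumnName(i), rs.getObject(i));
            }
            rows.add(row);
        }
        rs.close();
        stmt.close();
        request.setAttribute("rows", rows);           // page scope also works
        request.getRequestDispatcher("/view.jsp").forward(request, response);

    The JSP then iterates with <c:forEach var="row" items="${rows}"><c:out value="${row.name}"/></c:forEach>; EL reads request attributes directly, so no scriptlet is needed in the view.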

    Read the article

  • LINQ Exception

    - by Ravi
    Hi, I wrote a LINQ query that keeps fetching data from the database. On the first loop the table doesn't have a record yet, so the query throws an exception. After the first loop there are records in the database and the query works properly. Below is my query; please suggest whether, for the first loop (no records in the table), I have to modify the query or change it altogether. Ex: foreach (History history in historyList) { History history1 = (from p in context.History where p.TransferCode == history.TransferCode select p).First<History>() as History; if (history1 == null) { SaveToDataBase(history); } else { UpdateToDataBase(history1); } } Thanks
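
    The exception is almost certainly First() throwing InvalidOperationException on an empty sequence; FirstOrDefault() returns null instead, which is exactly what the null check already expects. A sketch:

        foreach (History history in historyList)
        {
            // FirstOrDefault() yields null when no row matches, instead of throwing.
            History history1 = context.History
                .FirstOrDefault(p => p.TransferCode == history.TransferCode);
            if (history1 == null)
                SaveToDataBase(history);
            else
                UpdateToDataBase(history1);
        }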

    Read the article

  • Load only some columns with Hibernate native SQL queries

    - by Alessandro Dionisi
    I have a table in the database and I want to load only some columns from the result set. I defined a native SQL query in the hbm file: <sql-query name="query"> <return alias="r" class="RawData"/> <![CDATA[ SELECT DESCRIPTION as {r.description} FROM RAWD_RAWDATAS r WHERE r.RAWDATA_ID=? ]]> </sql-query> This query, however, fails with the error: could not read column value from result set: RAWDATA1_14_0_; Invalid column name SQL Error: 17006, SQLState: null, because Hibernate tries to load all fields from the result set. I also found a bug in the Hibernate JIRA (http://opensource.atlassian.com/projects/hibernate/browse/HHH-3035). Does anyone know how to accomplish this task with a workaround?
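
    One workaround sketch, assuming only the description is actually needed: declare the columns as scalars instead of a full entity return, so Hibernate reads only what is listed:

        <sql-query name="query">
          <return-scalar column="DESCRIPTION" type="string"/>
          <![CDATA[
            SELECT r.DESCRIPTION FROM RAWD_RAWDATAS r WHERE r.RAWDATA_ID = ?
          ]]>
        </sql-query>

    The query then returns plain String values rather than RawData instances, so the calling code has to adapt to that shape.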

    Read the article

  • Appengine backreferences - need composite index?

    - by davezor
    I have a query that very recently started to throw: "The built-in indices are not efficient enough for this query and your data. Please add a composite index for this query." I checked the line on which this exception is being thrown, and the problem query is this one: count = self.vote_set.filter("direction =", 1).count() This is literally a one-filter operation using App Engine's built-in backreferences. I have no idea how to optimize this query... does anyone have any suggestions? I tried to add this index: - kind: Vote properties: - name: direction direction: desc - kind: Vote properties: - name: direction And I got a message (obviously) saying this was an unnecessary index. Thanks for your help in advance.
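
    For reference, here are the two definitions from the question as properly indented index.yaml entries (GAE's YAML is whitespace-sensitive; single-property indexes like these are normally built in, which is why the tool reports them as unnecessary):

        indexes:
        - kind: Vote
          properties:
          - name: direction
            direction: desc
        - kind: Vote
          properties:
          - name: direction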

    Read the article

  • How to load an entity by a key other than primary key?

    - by stacker
    In a customized servlet (Seam 2.1.2) this works fine: TableNameHome tableNameHome = (TableNameHome) Component.getInstance("tableNameHome"); TableName entity = tableNameHome.getInstance(); entity.setXXX(); tableNameHome.persist(); However, this one fails: entityManager = tableNameHome.getEntityManager(); Query query = entityManager.createQuery( "SELECT b FROM tablename b WHERE b.box_id = :key2nd" ); query.setParameter( "key2nd", value); List results = query.getResultList(); and leads to this error message: org.hibernate.hql.ast.QuerySyntaxException: tablename is not mapped [SELECT b FROM tablename b WHERE b.key2nd = :key2nd] In EJB 2.1 I could implement other finder methods. EntityHome.find() searches only by primary key. What do I need to do in order to query by a criterion other than the primary key?
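
    The "is not mapped" error points at the FROM clause: JPQL addresses the mapped entity and its properties, not the table and columns. Assuming the entity class is called TableName and box_id is mapped as a boxId property (both assumptions), a sketch of the corrected finder:

        // JPQL uses entity names and mapped property names, not SQL identifiers.
        Query query = entityManager.createQuery(
            "SELECT b FROM TableName b WHERE b.boxId = :key2nd");
        query.setParameter("key2nd", value);
        List results = query.getResultList();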

    Read the article

  • SQLAlchemy - loading user by username

    - by keithjgrant
    Just diving into Pylons here, and I'm trying to get my head around the basics of SQLAlchemy. I have figured out how to load a record by id: user_q = session.query(model.User) user = user_q.get(user_id) But how do I query by a specific field (i.e., username)? I assume there is a quick way to do it with the model rather than hand-building the query. I think it has something to do with the add_column() function on the query object, but I can't quite figure out how to use it. I've been trying stuff like this, but obviously it doesn't work: user_q = meta.Session.query(model.User).add_column('username'=user_name) user = user_q.get()
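
    add_column() is for adding extra result columns, not for filtering; the query's filter methods are what's needed here. A sketch:

        # filter_by takes keyword equality tests; filter takes column expressions.
        user = session.query(model.User).filter_by(username=user_name).first()
        # equivalent spelling:
        user = session.query(model.User).filter(model.User.username == user_name).first()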

    Read the article

  • Using Linq to filter parents by their children

    - by Nick L
    Having some problems with my Silverlight app (with RIA Services) filtering my results. The idea is that on the client I set up the EntityQuery and its filters and call Load. However, this isn't working for me. Here's my code. public void FireQuery(string filterValue) { EntityQuery<Parent> query = m_ParentDomainContext.GetParentQuery(); query = query.Where(p => p.Children.Any(c => c.Name.Contains(filterValue))); m_ParentDomainContext.Load(query, Query_Completed, null); } It compiles just fine; however, at runtime I get a "Query operator 'Any' is not supported" exception. Does anyone know of a good way to filter like this? Again, I'm looking for a way to set this up on the client.
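
    The restriction applies to the client-side EntityQuery, which supports only a limited operator set. One common workaround sketch is to move the child filter into a parameterized query method on the domain service, where the full LINQ-to-Entities provider is available (method and property names here mirror the question but are otherwise assumptions):

        [Query]
        public IQueryable<Parent> GetParentsByChildName(string filterValue)
        {
            // Server-side LINQ-to-Entities supports Any(); the client then calls
            // m_ParentDomainContext.GetParentsByChildNameQuery(filterValue) and Load.
            return this.ObjectContext.Parents
                .Where(p => p.Children.Any(c => c.Name.Contains(filterValue)));
        }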

    Read the article

  • Placing the where condition

    - by user182944
    I came up with the below query: SELECT ROOMNO,BUILDINGNO FROM MRM_ROOM_DETAILS WHERE ROOMID IN ( SELECT distinct roomid FROM MRM_BOOKING_DETAILS WHERE (CHECKIN NOT BETWEEN '2012-04-13 09:50:00' AND '2012-04-13 10:20:00') AND (CHECKOUT NOT BETWEEN '2012-04-13 09:50:00' AND '2012-04-13 10:20:00')) AND CAPACITY > 15 AND PROJECTIONSTATUS = 'NO'; I need to place this query in the SQLiteDatabase.query() method and fetch the rows accordingly. I am not able to understand how to place this big WHERE condition (which contains a sub-query as well) in place of the "String selection", i.e., the third parameter of the method. Shall I simply write the entire WHERE part (including the sub-query) as a string in the third parameter, or is there some better way of doing this? Please suggest the best approach. Regards,
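
    Putting the whole WHERE clause (sub-query included) into the selection string does work, but rawQuery() is usually cleaner once a sub-query is involved; a sketch, with the literal timestamps turned into bind arguments:

        // rawQuery takes the full statement; '?' placeholders bind the times.
        String sql =
            "SELECT ROOMNO, BUILDINGNO FROM MRM_ROOM_DETAILS " +
            "WHERE ROOMID IN (SELECT DISTINCT ROOMID FROM MRM_BOOKING_DETAILS " +
            "  WHERE CHECKIN NOT BETWEEN ? AND ? " +
            "  AND CHECKOUT NOT BETWEEN ? AND ?) " +
            "AND CAPACITY > 15 AND PROJECTIONSTATUS = 'NO'";
        Cursor c = db.rawQuery(sql, new String[] {
            "2012-04-13 09:50:00", "2012-04-13 10:20:00",
            "2012-04-13 09:50:00", "2012-04-13 10:20:00" });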

    Read the article

  • How to predict result set row count?

    - by Saurabh Kumar
    I have an application where I create a big SQL query dynamically for SQL Server 2008. This query is based on various search criteria which the user might give, such as search by last name, first name, SSN, etc. The requirement is that if the user gives a condition due to which the formed query might return a lot of rows (configurable to a maximum of N rows), then the application must instead send back a message saying that the user needs to refine the search query, as the existing query will return too many rows. I would not want to bring back, say, 5,000 rows to the client and then discard that data just to show the user an error. What is an efficient way to tackle this issue?
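
    One efficient tactic, sketched with placeholder table and parameter names: probe with the same predicates before fetching, either via a cheap count or by asking for one row more than the limit.

        -- Option A: cheap aggregate with the identical dynamic WHERE clause.
        SELECT COUNT(*) FROM Person
        WHERE LastName = @lastName AND FirstName = @firstName;
        -- If the count exceeds @maxRows, return "refine your search" instead.

        -- Option B: fetch at most @maxRows + 1 rows in one round trip; if the
        -- extra row comes back, discard the batch and return the message.
        SELECT TOP (@maxRows + 1) PersonId, LastName, FirstName
        FROM Person
        WHERE LastName = @lastName AND FirstName = @firstName;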

    Read the article

  • (database) I'm trying to create a form in Access 2007 with 2 drop-down boxes to view a report by state or name

    - by jeff orris
    I'm an intern at a database management company and the boss is training me in Access. I took the Access tutorials, and they definitely did not cover enough to do what seems like a simple task. My problem is this: I have a simple contact-info table with 16 columns (Local_Utility, Requested_User_Type, First_Name, Last_Name, Address 1, Address 2, Country, State, City, Zip, Phone_Number, Username\Email, Password, Confirm Password, and Parcel_Number) and 6 rows of names (keep in mind this is just a test from the boss to help me learn). I created a form with 2 drop-down boxes (Last Name and State), and I'm trying to create a View button to view an individual report for a query I made, which returns simple contact info with 6 columns (Last_Name, First_Name, Address1, City, State, and Phone_Number). Problem 1 is that I can view the query with the view-by-name or view-by-state button, but I can't view a simple individual report from the query using the button. Problem 2 is that when I put Forms!frmMyparamForm!txtMyStateParamField as criteria on the query, it works for the State drop-down, but when I use Forms!frmMyparamForm!txtMyNameParamField that annoying parameter box pops up. Problem 3 is that after I close the query, all the states and names in my drop-down boxes on the form disappear. I'm a beginner at this, please help me.
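
    For Problem 2, a parameter prompt usually means Access cannot resolve the reference (the form is not open, or the form/control name has a typo). For reference, a sketch of optional-parameter criteria in the query's SQL view, assuming the table is named ContactInfo (an assumption) and the form/control names from the question:

        SELECT Last_Name, First_Name, Address1, City, State, Phone_Number
        FROM ContactInfo
        WHERE (Last_Name = Forms!frmMyparamForm!txtMyNameParamField
               OR Forms!frmMyparamForm!txtMyNameParamField Is Null)
          AND (State = Forms!frmMyparamForm!txtMyStateParamField
               OR Forms!frmMyparamForm!txtMyStateParamField Is Null);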

    Read the article

  • Finding Most Recent Order for a Particular Item

    - by visitor
    I'm trying to write a SQL query for DB2 Version 8 which retrieves the most recent order of a specific part for a list of users. The query receives a parameter which contains a list of customerId numbers and the partId number. For example, the Order table has the columns OrderID, PartID, CustomerID, OrderTime. I initially wanted to try: Select * from Order where OrderId = ( Select orderId from Order where partId = #requestedPartId# and customerId = #customerId# Order by orderTime desc fetch first 1 rows only ); The problem with the above query is that it only works for a single user, and my query needs to include multiple users. Does anyone have a suggestion about how I could expand the above query to work for multiple users? If I remove my "fetch first 1 rows only", then it will return all rows instead of the most recent. I also tried using Max(OrderTime), but I couldn't find a way to return the OrderId from the sub-select. Thanks! Note: DB2 Version 8 does not support the SQL "TOP" function.
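
    Without TOP or a per-group FETCH FIRST, one approach sketch is the correlated MAX(OrderTime) pattern; the #...# markers keep the question's parameter style, and ties on OrderTime would return extra rows:

        -- Most recent order of the given part, per customer in the list.
        SELECT o.*
        FROM Order o
        WHERE o.PartID = #requestedPartId#
          AND o.CustomerID IN (#customerIdList#)
          AND o.OrderTime = (SELECT MAX(o2.OrderTime)
                             FROM Order o2
                             WHERE o2.PartID = o.PartID
                               AND o2.CustomerID = o.CustomerID);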

    Read the article

  • sql server - how to execute the second half of an OR only when the first one fails

    - by fn79
    Suppose I have a table with the following (value, text) records: ('company/about', 'about Us'), ('company', 'company'), ('company/contactus', 'company contact'). I have a very simple query in SQL Server, shown below. I am having a problem with the OR condition. In the query below, I am trying to find the text for the value 'company/about'. Only if it is not found do I want to run the other side of the OR. The query currently returns two records: ('company/about', 'about Us') and ('company', 'company'). Query: select * from tbl where value='company/about' or value=substring('company/about',0,charindex('/','company/about')) How can I modify the query so the result set contains only the ('company/about', 'about Us') row?
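
    One way to sketch the "only if the first match fails" logic is a UNION ALL whose second half is guarded by NOT EXISTS:

        SELECT value, text FROM tbl
        WHERE value = 'company/about'
        UNION ALL
        SELECT value, text FROM tbl
        WHERE value = substring('company/about', 0, charindex('/', 'company/about'))
          -- only runs when the exact match above found nothing:
          AND NOT EXISTS (SELECT 1 FROM tbl WHERE value = 'company/about');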

    Read the article

  • Rails 2.3.8 Compound condition

    - by Michael Guantonio
    I have a Rails query that I would like to run. The only problem I am having is the query structure. Essentially the query looks like this: queryList = model.find(:all, :conditions => [id = "id"]) #returns a query list #here is the issue compound = otherModel.find(:first, :select => "an_id", :conditions => ["some_other_id=? and an_id=?", some_other_id, an_id]) where an_id is actually a list of ids in the query list. How can I write this in Rails to basically associate a single id with a list that may contain ids...
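
    In Rails 2.3, passing an array as a bind value expands into an IN (...) list; a sketch, with the ids pulled out of the first result set (an_id and friends follow the question's placeholder names):

        # Collect the ids from the first query, then use IN (?) for the list.
        ids = queryList.map(&:an_id)
        compound = otherModel.find(:first,
          :select     => "an_id",
          :conditions => ["some_other_id = ? AND an_id IN (?)", some_other_id, ids])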

    Read the article

  • Upgrading Redmine, activerecord-mysql2-adapter not recognized

    - by David Kaczynski
    For upgrading Redmine from 1.0.1 to 2.1.2, I need to execute the command: rake db:migrate RAILS_ENV=production However, doing so produces the following error: rake aborted! Please install the mysql2 adapter: gem install activerecord-mysql2-adapter (mysql2 is not part of the bundle. Add it to Gemfile.) I have ran gem install activerecord-mysql2-adapter, but I still get the same error when I try to run the rake ... command. How do I get my RoR app to recognize that I have the mysql2 adapter installed already? or Is there something wrong with my activerecord-mysql2-adapter installation? Results of sudo bundle install: Using rake (10.0.0) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.8) Using builder (3.0.0) Using activemodel (3.2.8) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.8) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.8) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.8) Using activeresource (3.2.8) Using coderay (1.0.8) Using fastercsv (1.5.5) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.8) Using jquery-rails (2.0.3) Using metaclass (0.0.1) Using mocha (0.12.3) Using mysql (2.8.1) Using net-ldap (0.3.1) Using pg (0.14.1) Using ruby-openid (2.1.8) Using rack-openid (1.3.1) Using bundler (1.2.1) Using rails (3.2.8) Using rmagick (2.13.1) Using shoulda (2.11.3) Using sqlite3 (1.3.6) Using yard (0.8.3) [32mYour bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.[0m Results of sudo find / -name "*mysql2*": /var/lib/gems/1.8/doc/mysql2-0.3.11 /var/lib/gems/1.8/doc/activerecord-3.2.9/ri/ActiveRecord/Base/mysql2_connection-c.ri /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3 /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/em_mysql2_connection-c.ri /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/mysql2_connection-c.ri /var/lib/gems/1.8/gems/mysql2-0.3.11 /var/lib/gems/1.8/gems/mysql2-0.3.11/spec/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/mysql2.gemspec /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2.rb /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2/mysql2.so /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2.so /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.c /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.h /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.o /var/lib/gems/1.8/gems/activerecord-3.2.9/lib/active_record/connection_adapters/mysql2_adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3 /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/activerecord-mysql2-adapter.gemspec /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/arel/engines/sql/compilers/mysql2_compiler.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/em_mysql2_adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/mysql2_adapter.rb 
/var/lib/gems/1.8/gems/activerecord-3.2.8/lib/active_record/connection_adapters/mysql2_adapter.rb /var/lib/gems/1.8/cache/mysql2-0.3.11.gem /var/lib/gems/1.8/cache/activerecord-mysql2-adapter-0.0.3.gem /var/lib/gems/1.8/specifications/activerecord-mysql2-adapter-0.0.3.gemspec /var/lib/gems/1.8/specifications/mysql2-0.3.11.gemspec Contents of /usr/share/redmine/Gemfile: source 'http://rubygems.org' gem 'rails', '3.2.8' gem "jquery-rails", "~> 2.0.2" gem "i18n", "~> 0.6.0" gem "coderay", "~> 1.0.6" gem "fastercsv", "~> 1.5.0", :platforms => [:mri_18, :mingw_18, :jruby] gem "builder", "3.0.0" # Optional gem for LDAP authentication group :ldap do gem "net-ldap", "~> 0.3.1" end # Optional gem for OpenID authentication group :openid do gem "ruby-openid", "~> 2.1.4", :require => "openid" gem "rack-openid" end # Optional gem for exporting the gantt to a PNG file, not supported with jruby platforms :mri, :mingw do group :rmagick do # RMagick 2 supports ruby 1.9 # RMagick 1 would be fine for ruby 1.8 but Bundler does not support # different requirements for the same gem on different platforms gem "rmagick", ">= 2.0.0" end end # Database gems platforms :mri, :mingw do group :postgresql do gem "pg", ">= 0.11.0" end group :sqlite do gem "sqlite3" end end platforms :mri_18, :mingw_18 do group :mysql do gem "mysql" end end platforms :mri_19, :mingw_19 do group :mysql do gem "mysql2", "~> 0.3.11" end end platforms :jruby do gem "jruby-openssl" group :mysql do gem "activerecord-jdbcmysql-adapter" end group :postgresql do gem "activerecord-jdbcpostgresql-adapter" end group :sqlite do gem "activerecord-jdbcsqlite3-adapter" end end group :development do gem "rdoc", ">= 2.4.2" gem "yard" end group :test do gem "shoulda", "~> 2.11" # Shoulda does not work nice on Ruby 1.9.3 and seems to need test-unit explicitely. gem "test-unit", :platforms => [:mri_19] gem "mocha", "0.12.3" end local_gemfile = File.join(File.dirname(__FILE__), "Gemfile.local") if File.exists?(local_gemfile) puts "Loading Gemfile.local ..." if $DEBUG # `ruby -d` or `bundle -v` instance_eval File.read(local_gemfile) end # Load plugins' Gemfiles Dir.glob File.expand_path("../plugins/*/Gemfile", __FILE__) do |file| puts "Loading #{file} ..." 
if $DEBUG # `ruby -d` or `bundle -v` instance_eval File.read(file) end Contents of /usr/share/redmine/Gemfile.lock: GEM remote: http://rubygems.org/ specs: actionmailer (3.2.8) actionpack (= 3.2.8) mail (~> 2.4.4) actionpack (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) erubis (~> 2.7.0) journey (~> 1.0.4) rack (~> 1.4.0) rack-cache (~> 1.2) rack-test (~> 0.6.1) sprockets (~> 2.1.3) activemodel (3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) activerecord (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) arel (~> 3.0.2) tzinfo (~> 0.3.29) activeresource (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) activesupport (3.2.8) i18n (~> 0.6) multi_json (~> 1.0) arel (3.0.2) builder (3.0.0) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) railties (>= 3.1.0, < 5.0) thor (~> 0.14) json (1.7.5) mail (2.4.4) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) metaclass (~> 0.0.1) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack (>= 0.4) rack-openid (1.3.1) rack (>= 1.1.0) ruby-openid (>= 2.1.8) rack-ssl (1.3.2) rack rack-test (0.6.2) rack (>= 1.0) rails (3.2.8) actionmailer (= 3.2.8) actionpack (= 3.2.8) activerecord (= 3.2.8) activeresource (= 3.2.8) activesupport (= 3.2.8) bundler (~> 1.0) railties (= 3.2.8) railties (3.2.8) actionpack (= 3.2.8) activesupport (= 3.2.8) rack-ssl (~> 1.3.2) rake (>= 0.8.7) rdoc (~> 3.4) thor (>= 0.14.6, < 2.0) rake (10.0.0) rdoc (3.12) json (~> 1.4) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.1.3) hike (~> 1.2) rack (~> 1.0) tilt (~> 1.1, != 1.3.0) sqlite3 (1.3.6) test-unit (2.5.2) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) polyglot polyglot (>= 0.3.1) tzinfo (0.3.35) yard (0.8.3) PLATFORMS ruby DEPENDENCIES activerecord-jdbcmysql-adapter activerecord-jdbcpostgresql-adapter activerecord-jdbcsqlite3-adapter builder (= 3.0.0) coderay (~> 1.0.6) fastercsv (~> 1.5.0) i18n (~> 0.6.0) jquery-rails (~> 2.0.2) jruby-openssl mocha (= 0.12.3) mysql mysql2 (~> 0.3.11) net-ldap (~> 0.3.1) pg (>= 0.11.0) rack-openid rails (= 3.2.8) rdoc (>= 2.4.2) rmagick (>= 2.0.0) ruby-openid (~> 2.1.4) shoulda (~> 2.11) sqlite3 test-unit yard Results of gem list: actionmailer (3.2.9, 3.2.8) actionpack (3.2.9, 3.2.8) activemodel (3.2.9, 3.2.8) activerecord (3.2.9, 3.2.8) activerecord-mysql2-adapter (0.0.3) activeresource (3.2.9, 3.2.8) activesupport (3.2.9, 3.2.8) arel (3.0.2) builder (3.0.0) bundler (1.2.1) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) json (1.7.5) mail (2.4.4) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-openid (1.3.1) rack-ssl (1.3.2) rack-test (0.6.2) rails (3.2.9, 3.2.8) railties (3.2.9, 3.2.8) rake (10.0.0) rdoc (3.12) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.2.1, 2.1.3) sqlite3 (1.3.6) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) tzinfo (0.3.35) yard (0.8.3) Results of 'bundle show`: Gems included by the bundle: * actionmailer (3.2.8) * actionpack (3.2.8) * activemodel (3.2.8) * activerecord (3.2.8) * activeresource (3.2.8) * activesupport (3.2.8) * arel (3.0.2) * builder (3.0.0) * bundler (1.2.1) * coderay (1.0.8) * erubis (2.7.0) * fastercsv (1.5.5) * hike (1.2.1) * i18n (0.6.1) * journey 
(1.0.4) * jquery-rails (2.0.3) * json (1.7.5) * mail (2.4.4) * metaclass (0.0.1) * mime-types (1.19) * mocha (0.12.3) * multi_json (1.3.7) * mysql (2.8.1) * net-ldap (0.3.1) * pg (0.14.1) * polyglot (0.3.3) * rack (1.4.1) * rack-cache (1.2) * rack-openid (1.3.1) * rack-ssl (1.3.2) * rack-test (0.6.2) * rails (3.2.8) * railties (3.2.8) * rake (10.0.0) * rdoc (3.12) * rmagick (2.13.1) * ruby-openid (2.1.8) * shoulda (2.11.3) * sprockets (2.1.3) * sqlite3 (1.3.6) * thor (0.16.0) * tilt (1.3.3) * treetop (1.4.12) * tzinfo (0.3.35) * yard (0.8.3)
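
    One detail worth noticing in the pasted Gemfile: mysql2 is declared only inside platforms :mri_19, so under Ruby 1.8 (note the /var/lib/gems/1.8 paths throughout) Bundler never adds it to the bundle, and rake only sees bundled gems. Two hedged options: run Redmine 2.x under Ruby 1.9 so the :mri_19 group applies and database.yml's mysql2 adapter resolves, or, if staying on this Ruby, declare the gems in a Gemfile.local, which this Gemfile already loads when present. A sketch of the latter:

        # /usr/share/redmine/Gemfile.local -- picked up automatically by the
        # Gemfile's local_gemfile block, and survives Redmine upgrades.
        gem "mysql2", "~> 0.3.11"
        gem "activerecord-mysql2-adapter"

    Re-run bundle install afterwards and confirm with bundle show mysql2. Note that mysql2 0.3.x targets Ruby 1.9, so the Ruby upgrade is likely the cleaner fix.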

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together. Datatype mismatches in predicates that rely on implicit conversion. (Plamen Ratchev) This is a great example poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example: CREATE TABLE [dbo].[Page](     [PageId] [int] IDENTITY(1,1) NOT NULL,     [Title] [varchar](75) NOT NULL,     [Sequence] [int] NOT NULL,     [ThemeId] [int] NOT NULL,     [CustomCss] [text] NOT NULL,     [CustomScript] [text] NOT NULL,     [PageGroupId] [int] NOT NULL );  CREATE PROCEDURE PageSelectBySequence ( @sequenceMin smallint , @sequenceMax smallint ) AS BEGIN SELECT [PageId] , [Title] , [Sequence] , [ThemeId] , [CustomCss] , [CustomScript] , [PageGroupId] FROM [CMS].[dbo].[Page] WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax END  Note that the Sequence column is defined as int while the parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place. Using correlated subqueries instead of a join (Dave_Levy/Plamen Ratchev) There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:  SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParams.Name , PageEffectParams.Value , ( SELECT EffectName FROM dbo.Effect WHERE EffectId = dbo.PageEffects.EffectId ) AS EffectName FROM Page INNER JOIN PageEffects ON Page.PageId = PageEffects.PageId INNER JOIN PageEffectParams ON PageEffects.PageEffectId = PageEffectParams.PageEffectId  This can and should be written as:  SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParams.Name , PageEffectParams.Value , EffectName FROM Page INNER JOIN PageEffects ON Page.PageId = PageEffects.PageId INNER JOIN PageEffectParams ON PageEffects.PageEffectId = PageEffectParams.PageEffectId INNER JOIN dbo.Effect ON dbo.Effect.EffectId = dbo.PageEffects.EffectId  The correlated query may just as easily show up in the WHERE clause. It's not a good idea in the SELECT clause or the WHERE clause. Few or no comments. This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good. Comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out.
    If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why are good. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written. If they are needed, you need to consider reworking the SQL or simplifying your data model. Use of functions in a WHERE clause. (Anil Das) Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered. This will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance that it will cause one. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
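
    That last smell has a standard fix worth sketching, using a hypothetical Customer table: persist the computed value so the predicate becomes sargable and can be indexed.

        -- Hypothetical example: WHERE UPPER(LastName) = 'SMITH' defeats the index.
        ALTER TABLE Customer ADD LastNameUpper AS UPPER(LastName) PERSISTED;
        CREATE INDEX IX_Customer_LastNameUpper ON Customer (LastNameUpper);
        -- Now filter on the persisted column instead of calling the function:
        -- WHERE LastNameUpper = 'SMITH'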

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle that supports the idea that whether your requirements are for private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it's a convergence of four essential technology elements: Event Processing for event filtering and business rules, with Oracle Event Processing; Data Transformation and Loading, with Oracle Data Integrator; Real-time replication and integration, with Oracle GoldenGate; and Analytics and data discovery, with Oracle Business Intelligence. Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results.  Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year. 1. Increase organizational awareness | Oracle WebCenter Suite Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, the expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of records such as ERP and CRM systems. 2. Drive online customer engagement | Oracle Real-Time Decisions With more and more business being conducted on the Web, driving increased online customer engagement becomes a critical key to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience online, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendations. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improved critical success metrics such as shopping cart abandonment. 3. Easily build composite applications | Oracle Application Development Framework Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework components provide modular user interface components that can build rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications. 4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications. 5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable and high-performing ECM infrastructures. Two innovations in particular helped make this happen. The core ECM platform itself moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability. 
Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery. Learn more about Oracle's Enterprise 2.0 solutions.

    Read the article

  • Exposed: Fake Social Marketing

    - by Mike Stiles
    Brands and marketers who want to build their social popularity on a foundation of lies are starting to face more of an uphill climb. Fake social is starting to get exposed, and there are a lot of emperors getting caught without any clothes. Facebook is getting ready to do a purge of "Likes" on Pages that were a result of bots, fake accounts, and even real users who were duped or accidentally Liked a Page. Most of those accidental Likes occur on mobile, where it's easy for large fingers to hit the wrong space. Depending on the degree to which your Page has been the subject of such activity, you may see your number of Likes go down. But don't sweat it, that's a good thing. The social world has turned the corner and assessed the value of a Like. And the verdict is that a Like is valuable as an opportunity to build a real relationship with a real customer. Its value pales immensely compared to a user who's actually engaged with the brand. Those fake Likes aren't doing you any good. Huge numbers may once have impressed, but they're not fooling anybody anymore. Facebook's selling point to marketers is the ability to use a brand's fans to reach friends of those fans. Consequently, there has to be validity and legitimacy to a fan count. Speaking of mobile, Trademob recently reported 40% of clicks are essentially worthless, because 22% of them are accidental (again with the fat fingers), while 18% are trickery. Publishers will put huge banner ads next to tiny app buttons to increase the odds of an accident. Others even hide a banner behind another to score 2 clicks instead of 1. Pontiflex and Harris Interactive last year found 47% of users were more likely to click a mobile ad accidentally than deliberately. Beyond that, hijacked devices are out there manipulating click data. But to what end for a marketer? What's the value of a click on something a user never even saw? What's the value of a seen but accidentally clicked ad if there's no resulting transaction? Back to fake Likes, followers and views; they're definitely for sale on numerous sites, none of which I'll promote. $5 can get you 1,000 Twitter followers. You can even get followers targeted by interests. One site was set up by an unemployed accountant out of his house in England. He gets them from a wholesaler in Brooklyn, who gets them from a 19-year-old supplier in India. The unemployed accountant is making $10,000 a day. That means a lot of brands, celebrities and organizations are playing the fake social game, apparently not coming to grips with the slim value of the numbers they're buying. But now, in addition to having paid good money for non-ROI numbers, there's the embarrassment factor. At least a couple of sites have popped up allowing anyone to see just how many fake and inactive followers you have. Britain's Fake Follower Check and StatusPeople are the two getting the most attention. Enter any Twitter handle and the results are there for all to see. Fake isn't good, period. "Inactive" could be real followers, but if they're real, they're just watching, not engaging. If someone runs a check on your Twitter handle and turns up fake followers, does that mean you're suspect or have purchased followers? No. Anyone can follow anyone, so most accounts will have some fakes. Even account results like Barack Obama's (70% fake according to StatusPeople) and Lady Gaga's (71% fake) don't mean these people knew about all those fakes or initiated them.
Regardless, brands should realize they’re now being watched, and users are judging the legitimacy of their social channels. Use one of any number of tools available to assess and clean out fake Likes and followers so that your numbers are as genuine as possible. And obviously, skip the “buying popularity” route of social marketing strategy. It doesn’t work and it gets you busted…a losing combination.

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn, and £1.9 million in the UK. 70% of data breaches are done from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US States have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to suffer investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your data. Data is usually time-specific and isn't usually current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter used to be impractical, but now there are plenty of third-party tools to choose from. The process of obfuscation isn't risk free. The process must access the live data, and the success of the obfuscation process has to be carefully monitored.
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.

    Read the article

  • XML DB Content Connector unable to accept binary content due to Invalid argument(s) in call oracle.sql.BLOB.setBinaryStream(0L)

    - by sthieme
    Dear Readers, I am working on implementing a custom Document Management System using the Oracle XML DB Content Connector. See the following documentation link for details: Oracle XML DB Developer's Guide 11g Release 2 (11.2), Chapter 31, Using Oracle XML DB Content Connector: http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb_jcr.htm The following example in particular gave me some trouble to run successfully: Sample Code to Upload File: http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb_jcr.htm#ADXDB5627 I had already succeeded in setting some of the properties, i.e. jcr:encoding, jcr:mimeType, ojcr:displayName and ojcr:language. However, setting the jcr:data property as described in the example failed consistently, both with the documented file input stream and with a fixed string: contentNode.setProperty("jcr:data", "mystringvalue"); After some research I found the following Support Note, which describes the cause of the issue in JDBC driver version 11.2.0.1: Error "ORA-17068: Invalid argument(s) in call" Using Method setBinaryStream(0L) in JDBC 11.2.0.1 (Doc ID 1234235.1): https://support.oracle.com/epmos/faces/DocContentDisplay?id=1234235.1 It can easily be solved by upgrading to JDBC 11.2.0.2 or worked around using the following property setting: java -Doracle.jdbc.LobStreamPosStandardCompliant=false ... Kind regards, Stefan C:\Oracle\Database\product\11.2.0\dbhome_1>java -Doracle.jdbc.LobStreamPosStandardCompliant=false UploadFile jdbc:oracle:oci:@localhost:1522:orcl XDB welcome1 /public MyFile.txt text/plain 19.08.2014 11:50:26 oracle.jcr.impl.OracleRepositoryImpl login INFO: JCR repository descriptors: query.xpath.pos.index = true option.versioning.supported = false jcr.repository.version = 11.1.0.0.0 option.observation.supported = false option.locking.supported = false oracle.jcr.framework.version = 11.1.0.0.0 query.xpath.doc.order = false jcr.specification.version = 1.0 jcr.repository.vendor = Oracle option.query.sql.supported = false jcr.specification.name = Content Repository for Java Technology API level.2.supported = true level.1.supported = true jcr.repository.name = XML DB Content Connector jcr.repository.vendor.url = http://www.oracle.com oracle.jcr.persistenceManagerFactory = oracle.jcr.impl.xdb.XDBPersistenceManagerFactory option.transactions.supported = false 19.08.2014 11:50:26 oracle.jcr.impl.OracleRepositoryImpl login INFO: Session Session-1 connected for user id XDB 19.08.2014 11:50:27 oracle.jcr.impl.OracleSessionImpl logout INFO: Session-1: logout instead of C:\Oracle\Database\product\11.2.0\dbhome_1>java UploadFile jdbc:oracle:oci:@localhost:1522:orcl XDB welcome1 /public MyFile.txt text/plain 19.08.2014 10:56:39 oracle.jcr.impl.OracleRepositoryImpl login INFO: JCR repository descriptors: query.xpath.pos.index = true option.versioning.supported = false jcr.repository.version = 11.1.0.0.0 option.observation.supported = false option.locking.supported = false oracle.jcr.framework.version = 11.1.0.0.0 query.xpath.doc.order = false jcr.specification.version = 1.0 jcr.repository.vendor = Oracle option.query.sql.supported = false jcr.specification.name = Content Repository for Java Technology API level.2.supported = true level.1.supported = true jcr.repository.name = XML DB Content Connector jcr.repository.vendor.url = http://www.oracle.com oracle.jcr.persistenceManagerFactory = oracle.jcr.impl.xdb.XDBPersistenceManagerFactory option.transactions.supported = false 19.08.2014 10:56:39 oracle.jcr.impl.OracleRepositoryImpl login INFO: Session Session-1
connected for user id XDB Exception in thread "main" javax.jcr.RepositoryException: Unable to accept binary content at oracle.jcr.impl.ExceptionFactory.repository(ExceptionFactory.java:142) at oracle.jcr.impl.ExceptionFactory.otherwiseFailed(ExceptionFactory.java:98) at oracle.jcr.impl.xdb.XDBPersistenceManager.acceptBinaryStream(XDBPersistenceManager.java:1421) at oracle.jcr.impl.xdb.XDBResource.setContent(XDBResource.java:898) at oracle.jcr.impl.ContentNode.setProperty(ContentNode.java:472) at oracle.jcr.impl.OracleNode.setProperty(OracleNode.java:1439) at oracle.jcr.impl.OracleNode.setProperty(OracleNode.java:460) at UploadFile.main(UploadFile.java:54) Caused by: java.sql.SQLException: Invalid argument(s) in call at oracle.jdbc.driver.T2CConnection.newOutputStream(T2CConnection.java:2392) at oracle.sql.BLOB.setBinaryStream(BLOB.java:893) at oracle.jcr.impl.xdb.XDBPersistenceManager.acceptBinaryStream(XDBPersistenceManager.java:1393) ... 5 more
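
    If editing the launch command is awkward (for example, inside an app server), the same flag can presumably be set programmatically, as long as it runs before the driver is first used; a sketch:

        // Equivalent to -Doracle.jdbc.LobStreamPosStandardCompliant=false on
        // the java command line; set it before the first connection/LOB call.
        System.setProperty("oracle.jdbc.LobStreamPosStandardCompliant", "false");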

    Read the article

  • #OOW 2012: Big Data and The Social Revolution

    - by Eric Bezille
    As Cognizant CSO Malcolm Frank was saying about the "Future of Work", and how the Business should prepare in the face of the new generation, not only of devices and the "internet of things" but also of their users ("The Millennials") moving from "consumers" to "prosumers": we are at a turning point today which is bringing us to the next IT Architecture Wave. So this is no longer just about putting Big Data, Social Networks and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve, by identifying what processes need to be improved, but also, and more importantly, what processes are obsolete and need to be gotten rid of, and what new processes need to be put in place. It is about managing both the hierarchical and structured Enterprise and its social connections and influencers inside and outside of the Enterprise. And this applies everywhere, up to the Utilities and Smart Grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time, but about using those new technologies to understand what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave, is able to answer Business questions, and puts those capabilities in real time right in the hands of the decision makers... This is the turning curve, where IT is really moving from the past decade's "Cost Center" to "Value for the Business", as Corporate Stakeholders will be able to touch the value directly at the tips of their fingers. It is all about making Data-Driven Strategic decisions, encompassed and enriched by ALL the Data, and connected to customers/prosumers influencers. This brings stakeholders the ability to make informed decisions on questions like: "Who would be the best Olympic Gold winner to represent my automotive brand?"... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...) and connections linked to my Enterprise data. A true example was demonstrated by Larry Ellison in real time during his keynote yesterday, where "Hardware and Software Engineered to Work Together" is not only about extreme performance but also about solutions that the Business can touch, thanks to well-integrated Customer eXperience Management and Social Networking: bringing IT the capabilities to move to the next IT Architecture wave. This was also illustrated today in two other sessions that I had the opportunity to attend. The first session brought the "Internet of Things" in Oil & Gas into actionable decisions thanks to Complex Event Processing capturing sensor data, with the ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets, and Exalytics to provide the informed-decision interface up to the end user. The second session showed the Real Time Decision engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet. I have to close my post here, as I have to go run our practical hands-on lab, cooked up with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud recently announced last Sunday by Larry, and developed through many examples this morning by John Fowler.
    John also announced Solaris 11.1 today, with a range of network innovation and virtualization at the OS level, as well as many optimizations for applications, like Oracle RAC, with the introduction of the lock manager inside the Solaris Kernel. Last but not least, he introduced the Xsigo Datacenter Fabric for highly simplified networks and storage virtualization for your Cloud Infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...

    Read the article

  • Using rel=next and rel=prev with multiple sets of paginated content on the same page

    - by jakejgordon
    We are running into issues with trying to figure out how to implement rel="next" and rel="prev" -- coupled with rel="canonical" -- with multiple sets of paginated content on the same page, with pages in multiple cultures. In other words, how do we implement these when we have a pager for both Product Reviews and Questions and Answers (aka "Q&A") on the same page, with duplicate content across culture-specific URLs (e.g. /us/en/my-product vs. /ca/en/my-product)? Our current implementation will actually do a full postback when you click Page 2, and will add something to the query string (e.g. website.com/ca/en/my-product?reviewpage=2 or website.com/ca/en/my-product?questionpage=2). If we only had one set of paginated content then the implementation would certainly be more straightforward. Adding a second set of paginated content (i.e. Q&A) complicates things. Let's assume that we want the United States English page to be the canonical target (i.e. /us/en/my-product) based on culture. If you go to the /ca/en/my-product page you'll have a rel="canonical" href="/us/en/my-product". So far so good. Let's also assume that we are not implementing a page that lists ALL Product Reviews and Q&A. This would likely solve a number of our problems by using rel="canonical" to this page, but is not an option for reasons that are out of scope for this discussion. Now if you click on page 2 of Product Reviews, it will reload the page with /ca/en/my-product?reviewpage=2 as the URL. Given this scenario, here are my questions: On page 2 of the my-product page on the Canadian site, should there be a rel="canonical" to /us/en/my-product?reviewpage=2 (assuming the content is identical in the United States and Canada)? Should the rel="prev" go to /ca/en/my-product?reviewpage=1, or should it go to /ca/en/my-product? The query-string version would really only be accessible when using the pager, and shows the exact same content as the base page. The following two questions are closely related to this one. Should /ca/en/my-product?reviewpage=1 have a rel="canonical" directly to /us/en/my-product (the United States page with nothing in the query string), since the content is identical? Given that Q&A content is also paginated, should there be a rel="next" on the base page without a query string? In other words, should the /ca/en/my-product page have a rel="next" to /ca/en/my-product?reviewpage=2 AND a rel="next" to /ca/en/my-product?questionpage=2? So far as I can tell it doesn't make sense to have multiple rel="next" implementations on the same page. I suspect that the pages with query string values should have rel="next" and rel="prev" that only point to other pages with query strings and not to the base page. The ?reviewpage=1 and ?questionpage=1 pages would then just have a rel="canonical" to /us/en/my-product. Thoughts? I know this is a tough one -- that's why I brought it to this community. Thanks so much for your help in advance!
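
    For what it's worth, a sketch of what the head of /ca/en/my-product?reviewpage=2 might carry under the reading suggested at the end of the question (canonical maps to the US equivalent; prev/next stay within the culture and the review series; all URLs follow the question's examples):

        <link rel="canonical" href="http://website.com/us/en/my-product?reviewpage=2" />
        <link rel="prev" href="http://website.com/ca/en/my-product?reviewpage=1" />
        <link rel="next" href="http://website.com/ca/en/my-product?reviewpage=3" />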

    Read the article

  • SQL to select random mix of rows fairly

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, the categories for those products (a product can be in one or more categories, referred to as "Product Categories"), and which stores the products are available at. These tables are updated once a week from a feed from the customer. Since, for our purposes, some of the product categories are the same or closely related, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers: Data Tables: Products: 475,000 Manufacturers: 1300 Stores: 150 General Categories: 245 Product Categories: 500 Mapping Tables: Product Category -> Product: 655,000 Stores -> Products: 50,000,000 Now, for the actual problem: As part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them on manufacturer, including each row's number within its partition, then selecting from that where the row number for that manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. This query looks something like this: SET ROWCOUNT 6 select p.Id, GeneralCategory_Id, Product_Id, ISNULL(m.DisplayName, m.Name) AS Vendor, MSRP, MemberPrice, FamilyImageName from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id, ctp.Store_id, Manufacturer_id, ROW_NUMBER() OVER (PARTITION BY Manufacturer_id ORDER BY NEWID()) AS 'VendorOrder', MSRP, MemberPrice, FamilyImageName from GeneralCategory gc inner join GeneralCategoriesToProductCategories gctpc ON gc.Id=gctpc.GeneralCategory_Id inner join ProductCategoryToProduct pctp on gctpc.ProductCategory_Id = pctp.ProductCategory_Id inner join Product p on p.Id = pctp.Product_Id inner join StoreToProduct ctp on p.Id = ctp.Product_id where gc.Id = @GeneralCategory and ctp.Store_id=@StoreId and p.Active=1 and p.MemberPrice >0) p inner join Manufacturer m on m.Id = p.Manufacturer_id where VendorOrder <=6 order by NEWID() SET ROWCOUNT 0 (I've tried to somewhat format it to make it cleaner, but I don't think it really helps.) Running this query with an execution plan shows that for the majority of these tables, it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time: Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when making this table, but I'm not concerned about this at this point, as compared to the other seek... Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant. On categories without a lot of products, performance is acceptable (<50ms); however, larger categories can take a few hundred ms, with the largest category (about 170k products) taking 3s.
    It seems I have two ways to go from this point: Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and maintain the requirements (random products, with a good mix of manufacturers). Denormalize this data for the purpose of this query when doing the once-a-week import. However, I am unsure how to do this and maintain the requirements. Does anyone have any input on either of these items?
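
    One low-risk rewrite worth trying before denormalizing, sketched below: SET ROWCOUNT for limiting SELECTs is deprecated, and TOP (n) exposes the row goal to the optimizer directly.

        SELECT TOP (6)
               p.Id, p.GeneralCategory_Id, p.Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               p.MSRP, p.MemberPrice, p.FamilyImageName
        FROM ( /* the same partitioned inner query as above */ ) p
        INNER JOIN Manufacturer m ON m.Id = p.Manufacturer_id
        WHERE p.VendorOrder <= 6
        ORDER BY NEWID();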

    Read the article
