Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.


  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem: 72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
          -- Columns inherited from climate.measurement:
          id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
          station_id integer NOT NULL,
          taken date NOT NULL,
          amount numeric(8,2) NOT NULL,
          category_id smallint NOT NULL,
          flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        ) INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id);
        CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken));

    (Foreign key constraints are to be added later.) The following query runs abysmally slow due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM climate.measurement m
        WHERE m.station_id IN (
          SELECT s.id
          FROM climate.station s, climate.city c
          WHERE
            -- For one city ...
            c.id = 5182 AND
            -- Where stations are within an elevation range ...
            s.elevation BETWEEN 0 AND 3000 AND
            6371.009 * SQRT(
              POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
              (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
               POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
            ) <= 50
        ) AND
        -- Begin extracting the data from the database.
        -- The data before 1900 is shaky; insufficient after 2009.
        extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
        -- Whittled down by category ...
        m.category_id = 1 AND
        m.taken BETWEEN
          -- Start date.
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          -- End date. Calculated by checking to see if the end date wraps
          -- into the next year. If it does, then add 1 to the current year.
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
            sign( (extract( YEAR FROM m.taken )||'-12-31')::date -
                  (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
          ) AS text)||'-12-31')::date
        GROUP BY extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
            sign( (extract( YEAR FROM m.taken )||'-12-31')::date -
                  (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
          ) AS text)||'-12-31')::date

    The HashAggregate in the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question: What is the proper way to index the dates to avoid full table scans? Options I have considered:

    - GIN
    - GiST
    - Rewrite the WHERE clause
    - Separate year_taken, month_taken, and day_taken columns in the tables

    What are your thoughts? Thank you!
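
    A sketch of one rewrite option (an editorial addition, not from the original post): every date trivially lies between January 1 and December 31 of its own year, so the self-referencing BETWEEN can be replaced with constant bounds, and a plain btree index on taken (one per child table) can then serve a range scan:

        CREATE INDEX measurement_12_013_t_idx
            ON climate.measurement_12_013 USING btree (taken);

        -- Sargable form: bare column on the left, constants on the right.
        SELECT extract(YEAR FROM m.taken) AS year_taken,
               count(1)      AS measurements,
               avg(m.amount) AS amount
        FROM climate.measurement m
        WHERE m.category_id = 1
          AND m.taken BETWEEN date '1900-01-01' AND date '2009-12-31'
          AND m.station_id IN (
                SELECT s.id
                FROM climate.station s, climate.city c
                WHERE c.id = 5182
                  AND s.elevation BETWEEN 0 AND 3000
                  -- great-circle distance filter as in the original query
              )
        GROUP BY extract(YEAR FROM m.taken);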

    Read the article

  • (My)SQL performance: updating one field vs many unnecessary fields

    - by changokun
    I'm processing a form that has a lot of fields for a user who is editing an existing record. The user may have changed only one field, but I would typically run an update query that sets the values of all the fields, even though most of them don't change. I could do some sort of tracking to see which fields have actually changed, and only update the few that did. Is there a performance difference between updating all fields in a record and updating only the one that changed? Are there other reasons to go with either method? The shotgun method is pretty easy...
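
    A minimal sketch of the two approaches (table and column names are hypothetical; the ? marks are bind parameters):

        -- Shotgun: rewrite every column.
        UPDATE users SET name = ?, email = ?, phone = ?, notes = ? WHERE id = ?;

        -- Targeted: only the column that actually changed.
        UPDATE users SET email = ? WHERE id = ?;

    Depending on the engine, unchanged columns may be detected and skipped anyway; the targeted form mainly shrinks the statement and reduces the chance of clobbering a concurrent edit to a field the user never touched.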

    Read the article

  • Problem using Embarcadero ER/Studio with Postgres (With Serial PK)

    - by Paul
    Hello... I created a table using ER/Studio 8.0.3. The table has a serial PK (SERIAL/INTEGER in ER/Studio), but the physical model ER/Studio generates converts the SERIAL to INTEGER, and the generated table in the database has an integer PK without the auto-increment functionality. Any idea?

    Table generated:

        CREATE TABLE test
        (
          id integer NOT NULL
        )

    Should be:

        CREATE TABLE test
        (
          id serial NOT NULL
        )
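
    A workaround sketch, run after the generated DDL; this is essentially the expansion PostgreSQL itself performs for a serial column:

        CREATE SEQUENCE test_id_seq;
        ALTER TABLE test ALTER COLUMN id SET DEFAULT nextval('test_id_seq');
        ALTER SEQUENCE test_id_seq OWNED BY test.id;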

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, and Start and End Time columns in the first table below) associated with time and production quantity data (the Output and Time columns), to which aggregate functions (SUM, COUNT, AVG) are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do would transform table content to a granularity of minutes, rather than the current production-event ("Event Start Time" to "Event End Time") granularity. The reprocessed rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row created at production-event granularity and convert it to minute granularity, eliminating the now-redundant Event End Time and Time columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
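
    A sketch using a helper table of integers (all names are hypothetical; numbers holds the values 0 through 1439, one per minute of a day). Each event row joins to one numbers row per elapsed minute, which is what turns one event into many minute rows; for the sample event, TIMESTAMPDIFF yields 15, so 16 rows of ROUND(2120/16) = 133 are produced:

        INSERT INTO production_minutes
               (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               e.event_start + INTERVAL n.n MINUTE,
               ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end) + 1))
        FROM   events e
        JOIN   numbers n
          ON   n.n <= TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end);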

    Read the article

  • transfer database from local machine to hosting server

    - by c11ada
    Hey all, I'm trying to transfer my database from my local machine to the server. I'm using the "publish to provider" wizard in Visual Web Developer to generate a script, then running the generated script against the server database. I keep getting the following errors; can someone please tell me where I'm going wrong?

        Msg 468, Level 16, State 9: Cannot resolve the collation conflict between
        "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to
        operation. (Raised at lines 53, 58, 87 and 92 of
        aspnet_UsersInRoles_RemoveUsersFromRoles, and at lines 48, 52, 79, 83 and
        93 of aspnet_UsersInRoles_AddUsersToRoles.)

        Msg 15151, Level 16, State 1, Line 1: Cannot find the object
        'aspnet_UsersInRoles_AddUsersToRoles', because it does not exist or you
        do not have permission.

        Msg 15151, Level 16, State 1, Line 1: Cannot find the object
        'aspnet_UsersInRoles_RemoveUsersFromRoles', because it does not exist or
        you do not have permission.

    Thanks
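
    One common fix (a sketch, assuming the failing comparisons inside those procedures are between columns of differing collations) is to make the offending comparison collation-neutral inside the script:

        -- Force both sides of the comparison to the database default collation:
        WHERE u.UserName = r.UserName COLLATE DATABASE_DEFAULT

    Alternatively, recreate the target database with the same collation as the source (SQL_Latin1_General_CP1_CI_AS vs. Latin1_General_CI_AS) so the generated script runs unmodified; the Msg 15151 errors follow from the procedures failing to be created in the first place.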

    Read the article

  • Fetch posts with attachments in a certain category?

    - by TiuTalk
    I need to retrieve a list of posts that have (at least) one attachment belonging to a category in WordPress. The relation between attachments and categories I made myself using the WordPress default method. Here's the query that I'm running right now:

        SELECT p.*
        FROM `wp_posts` AS p                       # The post
        INNER JOIN `wp_posts` AS a                 # The attachment
                ON p.`ID` = a.`post_parent`
               AND a.`post_type` = 'attachment'
        INNER JOIN `wp_term_relationships` AS ra
                ON a.`ID` = ra.`object_id`
               AND ra.`term_taxonomy_id` IN (3)    # The category ID list
        WHERE p.`post_type` = 'post'
        ORDER BY p.`post_date` DESC
        LIMIT 15

    The problem is that the query only uses the first attachment found, and if that one doesn't belong to the category, the post isn't returned.
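
    A sketch of one rewrite that asks "does any attachment of this post sit in the category?" instead of joining the attachments directly (same term ID list as above):

        SELECT p.*
        FROM wp_posts AS p
        WHERE p.post_type = 'post'
          AND EXISTS (
                SELECT 1
                FROM wp_posts AS a
                JOIN wp_term_relationships AS ra ON ra.object_id = a.ID
                WHERE a.post_parent = p.ID
                  AND a.post_type  = 'attachment'
                  AND ra.term_taxonomy_id IN (3)
              )
        ORDER BY p.post_date DESC
        LIMIT 15;

    EXISTS also avoids the duplicate post rows a plain join produces when a post has several matching attachments.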

    Read the article

  • How to avoid overlapping date ranges when using a grouping clause?

    - by k rey
    I have a situation where I need to find time spans between value changes. I tried a simple group by clause but it eliminates overlapping changes. Consider the following example:

        create table #items
        (
          code varchar(4),
          class varchar(4),
          txdate datetime
        )

        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-01');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-02');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-03');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-04');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-05');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-06');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-07');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-08');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-09');

        select code
             , class
             , min(txdate) mindate
             , max(txdate) maxdate
        from #items
        group by code, class

    This returns the following results (notice the overlapping date ranges):

        |code|class|mindate   |maxdate   |
        ----------------------------------
        |A   |C    |2010-01-01|2010-01-07|
        |A   |D    |2010-01-04|2010-01-09|

    I would like to have the query return the following:

        |code|class|mindate   |maxdate   |
        ----------------------------------
        |A   |C    |2010-01-01|2010-01-03|
        |A   |D    |2010-01-04|2010-01-05|
        |A   |C    |2010-01-06|2010-01-07|
        |A   |D    |2010-01-08|2010-01-09|

    Any ideas and suggestions?
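
    A sketch of the standard "gaps and islands" rewrite, assuming a SQL Server version with window functions (2005 or later):

        select code
             , class
             , min(txdate) as mindate
             , max(txdate) as maxdate
        from (
            select *
                 , row_number() over (partition by code        order by txdate)
                 - row_number() over (partition by code, class order by txdate) as grp
            from #items
        ) as runs
        group by code, class, grp
        order by mindate;

    The difference of the two row numbers is constant within each unbroken run of the same class, so grouping on it keeps consecutive runs separate instead of merging them.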

    Read the article

  • Problems getting foreign keys working in MySQL

    - by thehuby
    I've been trying to get a delete to cascade and it just doesn't seem to work. I'm sure I am missing something obvious; can anyone help me find it? I would expect a delete on the articles table to trigger a delete on the corresponding rows in the article_section_lt table.

        CREATE TABLE articles (
          id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
          url_stub VARCHAR(255) NOT NULL UNIQUE,
          h1 VARCHAR(60) NOT NULL UNIQUE,
          title VARCHAR(60) NOT NULL,
          description VARCHAR(150) NOT NULL,
          summary VARCHAR(150) NOT NULL DEFAULT "",
          html_content TEXT,
          date DATE NOT NULL,
          updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        ) ENGINE=INNODB;

        CREATE TABLE article_sections ( /* blog, news etc */
          id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
          url_stub VARCHAR(255) NOT NULL UNIQUE,
          h1 VARCHAR(60) NOT NULL,
          title VARCHAR(60) NOT NULL,
          description VARCHAR(150) NOT NULL,
          summary VARCHAR(150) NOT NULL DEFAULT "",
          html_content TEXT NOT NULL DEFAULT ""
        ) ENGINE=INNODB;

        CREATE TABLE article_section_lt (
          fk_article_id INTEGER UNSIGNED NOT NULL REFERENCES articles(id) ON DELETE CASCADE,
          fk_article_section_id INTEGER UNSIGNED NOT NULL
        ) ENGINE=INNODB;
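
    The likely culprit: MySQL parses a column-level REFERENCES clause but silently ignores it, so no foreign key is ever created; InnoDB only honors table-level FOREIGN KEY constraints. A sketch of the link table written that way:

        CREATE TABLE article_section_lt (
          fk_article_id INTEGER UNSIGNED NOT NULL,
          fk_article_section_id INTEGER UNSIGNED NOT NULL,
          FOREIGN KEY (fk_article_id)
            REFERENCES articles (id)
            ON DELETE CASCADE,
          FOREIGN KEY (fk_article_section_id)
            REFERENCES article_sections (id)
            ON DELETE CASCADE
        ) ENGINE=INNODB;

    SHOW CREATE TABLE article_section_lt is a quick way to confirm whether the constraints actually exist.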

    Read the article

  • How to define a query on an n-m table

    - by user559889
    Hi, I'm having trouble defining a query. I have a Product table and a Category table. A product can belong to multiple categories and vice versa, so there is also a Product-Category table. Now I want to select all products that belong to a certain category, but if the user does not provide a category I want all products. I tried to create a query using a join, but this selects a product multiple times if it belongs to multiple categories (in the case where no specific category is queried). What kind of query do I have to create? Thanks
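
    A sketch of one way to handle both cases in a single statement (table, column, and parameter names are hypothetical):

        SELECT p.*
        FROM   product p
        WHERE  @category_id IS NULL
           OR  EXISTS (SELECT 1
                       FROM   product_category pc
                       WHERE  pc.product_id  = p.id
                         AND  pc.category_id = @category_id);

    Because the category check is an EXISTS rather than a join, each product appears at most once no matter how many categories it belongs to.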

    Read the article

  • How to put an alias name inside a query string?

    - by Vibin Jith
    Please look at the alias names. I want to assign the query to a string variable; how do I put single quotes inside a string that is itself delimited by single quotes?

        SET @SQLString = N'SELECT purDetQty as 'detQty', stkBatchCode as 'batchCode', purDetProductId as 'productId'
        INTO #ProductTable
        FROM PurchaseDetail
        INNER JOIN Stock on stkId = purDetStockId
        WHERE purDetID = @detId'
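
    In T-SQL, a single quote inside a string literal is escaped by doubling it. A corrected sketch:

        SET @SQLString = N'SELECT purDetQty AS ''detQty'',
                                  stkBatchCode AS ''batchCode'',
                                  purDetProductId AS ''productId''
                           INTO #ProductTable
                           FROM PurchaseDetail
                           INNER JOIN Stock ON stkId = purDetStockId
                           WHERE purDetID = @detId';

    Writing the aliases as [detQty], [batchCode] and [productId] would avoid the escaping entirely.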

    Read the article

  • Can this be done with the ORM? - Django

    - by RadiantHex
    Hi folks, I have a few items listed in a database, ordered using Reddit's ranking algorithm. This is it:

        def reddit_ranking(post):
            t = time.mktime(post.created_on.timetuple()) - 1134000000
            x = post.score
            if x > 0:
                y = 1
            elif x == 0:
                y = 0
            else:
                y = -1
            if x < 0:
                z = 1
            else:
                z = x
            return (log(z) + y * t / 45000)

    I'm wondering if there is any clever way of using Django's ORM to UPDATE the models in bulk, without doing this:

        items = Item.objects.filter(created_on__gte=datetime.now() - timedelta(days=7))
        for item in items:
            item.reddit_rank = reddit_ranking(item)
            item.save()

    I know about the F() object, but I can't figure out if this function can be performed inside the ORM. Any ideas? Help would be very much appreciated!
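
    F() arithmetic alone cannot express the log() call, but since the formula uses only columns, it can be pushed into the database as one set-based statement run through connection.cursor().execute(...). A sketch, with an assumed table name and MySQL syntax (GREATEST(score, 1) stands in for the z handling, giving log(1) = 0 for non-positive scores):

        UPDATE app_item
        SET    reddit_rank = LOG(GREATEST(score, 1))
                           + SIGN(score) * (UNIX_TIMESTAMP(created_on) - 1134000000) / 45000
        WHERE  created_on >= NOW() - INTERVAL 7 DAY;

    This trades ORM portability for a single bulk update instead of one save() per row.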

    Read the article

  • SSIS 2005 - How to Import a Fixed Width Flat File?

    - by Greg
    I have a flat file that looks something like this:

        junk I don't care about \n
        \n
        column names\n
        val1 val2 val3\n
        val1 val2 val3\n
        column names \n
        val1 val2 val3\n

    I only care about the lines with values. These value lines are all in fixed-width format and have the same line length. The other junk lines and column-name lines can have any line width. When I try the flat-file fixed-width option or the ragged-right option, the preview looks all wrong. Any ideas what the easiest way to get this into SSIS is?
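
    One workable pattern (a sketch, with hypothetical widths and filter): read each line whole into a single-column staging table, then keep and split only rows of the known fixed length. In SSIS terms this is a ragged-right source with one wide column, with the filtering and splitting done afterwards in SQL (or in a Conditional Split plus Derived Column):

        INSERT INTO dbo.Values_Parsed (val1, val2, val3)
        SELECT RTRIM(SUBSTRING(line, 1, 8)),
               RTRIM(SUBSTRING(line, 9, 8)),
               RTRIM(SUBSTRING(line, 17, 8))
        FROM   dbo.Staging_Lines
        WHERE  LEN(line) = 24;   -- only the fixed-width value rows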

    Read the article

  • Question about frequency of updating Access

    - by I__
    I have a table in an Access database. This database is used on a regular basis, basically from 9 to 5. Someone else has a copy of this exact table; sometimes records are added, sometimes deleted, and sometimes data within the records is updated. I need to update the Access database table from the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5000 records. Would it severely lock up the table for a few seconds every hour? I would like to publicly apologize for my rude comment to David Fenton.
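
    A sketch of one hourly sync pass in Access/Jet SQL, assuming both tables share a primary key; table and field names are hypothetical, with the offsite copy linked as remote_items:

        -- 1. Pull changed values into existing rows.
        UPDATE local_items INNER JOIN remote_items
            ON local_items.ID = remote_items.ID
        SET local_items.Field1 = remote_items.Field1,
            local_items.Field2 = remote_items.Field2;

        -- 2. Add rows that exist only remotely.
        INSERT INTO local_items
        SELECT remote_items.*
        FROM remote_items LEFT JOIN local_items
            ON remote_items.ID = local_items.ID
        WHERE local_items.ID IS NULL;

        -- 3. Remove rows deleted remotely.
        DELETE FROM local_items
        WHERE ID NOT IN (SELECT ID FROM remote_items);

    At roughly 5000 rows per table, each step should hold locks only briefly.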

    Read the article

  • Do you have examples of unhelpful or obscure error messages?

    - by Wiretap
    Yesterday I got this error:

        The processing instruction target matching "[xX][mM][lL]" is not allowed

    When I investigated, it was caused by whitespace at the very start of my XML document. Not difficult to solve, but I was struck by how unhelpful that particular error message was for identifying the actual problem. So what other examples of obscure errors do people have, and are you willing to admit to some of your own making?

    Read the article

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL 2008 merge replication scenario, with a large server database (including views and stored procs) being replicated to client machines.

    What I'm doing:
    * adding a new table to the database
    * marking the new table for replication (using SP_AddMergeArticle)
    * altering a view, already part of the replicated content, to include fields from this new table (which is joined to the tables in the existing view); a stored procedure is similarly updated

    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated.

    Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client.

    The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail.

    Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't replicate anyway, regardless of what's going on with the new table.

    Read the article

  • Joins: how to pass a parameter into a stored procedure that executes a join query

    - by Ranjana
    I have used joins to get data from two tables under a common name, as:

        SELECT userValidity.UserName, userValidity.ValidFrom, userValidity.ValidUpTo,
               userValidity.TotalPoints, persons.SenderID
        FROM userValidity
        INNER JOIN persons ON userValidity.Username = persons.Username

    but I need to execute this query with only the username, which I pass as a parameter to the stored procedure. How do I use the username parameter with this join?

        alter procedure GetNameIDUserInformation
        (
          @user varchar(max)
        )
        as
        begin
          SELECT userValidity.UserName, userValidity.ValidFrom, userValidity.ValidUpTo,
                 userValidity.TotalPoints, persons.SenderID
          FROM userValidity
          INNER JOIN persons ON userValidity.Username = persons.Username
        end

    In this SP, where do I have to apply the @user parameter to get the single row for my user record?
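
    A sketch of the procedure with the parameter applied in a WHERE clause on the joined result:

        ALTER PROCEDURE GetNameIDUserInformation
        (
          @user varchar(max)
        )
        AS
        BEGIN
          SELECT uv.UserName, uv.ValidFrom, uv.ValidUpTo, uv.TotalPoints, p.SenderID
          FROM userValidity AS uv
          INNER JOIN persons AS p ON uv.UserName = p.UserName
          WHERE uv.UserName = @user;
        END

    Called as EXEC GetNameIDUserInformation @user = 'someuser'; (the value 'someuser' is a placeholder).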

    Read the article

  • Insert query results into a table in MS Access 2010

    - by CodeMed
    I need to transform data from one schema into another in an MS Access database. This involves writing queries to select data from the old schema and then inserting the results of the queries into tables in the new schema. Below is an example of what I am trying to do. The SELECT component works fine, but the INSERT component does not. Can someone show me how to fix this so that it inserts the results of the SELECT statement into the destination table?

        INSERT INTO CompaniesTable (CompanyName)
        VALUES (
          SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
          FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
          LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
            ON a.ContactID = b.ContactID
        );

    The definition of the target table (CompaniesTable) is:

        CompanyID    AutoNumber
        CompanyName  Text
        Description  Text
        WebSite      Text
        Email        Text
        TypeNumber   Number
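
    The fix is to drop the VALUES keyword: INSERT INTO ... SELECT takes the SELECT directly, while VALUES only accepts literal row values and cannot wrap a SELECT. A corrected sketch:

        INSERT INTO CompaniesTable (CompanyName)
        SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
        FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
        LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
          ON a.ContactID = b.ContactID;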

    Read the article

  • Index on column with only 2 distinct values

    - by Will
    I am wondering about the performance of this index. I have an "invalid" varchar(1) column that holds one of two values: NULL or 'Y'. I have an index on (invalid), as well as on (invalid, last_validated); last_validated is a datetime used by an unrelated SELECT query. I am flagging a small portion (1-5%) of the rows in the table as 'to be deleted', so that DELETE FROM items WHERE invalid = 'Y' does not perform a full table scan for the invalid items. The problem is that the actual DELETE is now quite slow, possibly because all the index entries are being removed as the rows are deleted. Would a bitmap index provide better performance for this? Or perhaps no index at all?
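
    A note and sketch, assuming Oracle since bitmap indexes are mentioned: a b-tree index stores no entry for rows whose indexed columns are all NULL, so the single-column index only ever contains the 1-5% of 'Y' rows and is already compact for locating them; the DELETE cost comes mostly from maintaining all of the table's other indexes for every deleted row. A bitmap index would find the rows equally well but locks at bitmap-fragment granularity, which hurts under concurrent DML:

        -- Compact by construction: all-NULL keys are not stored in the b-tree.
        CREATE INDEX items_invalid_idx ON items (invalid);

        -- Alternative, not additive (Oracle allows one index per column list);
        -- fast to scan, but risky with concurrent inserts/updates/deletes.
        CREATE BITMAP INDEX items_invalid_bix ON items (invalid);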

    Read the article

  • How To Get A Field Value Based On The Max Of Another Field In VFP v8.0

    - by DaveB
    So, I have a table and I want to get the value of one field from the record with the greatest DateTime() value in another field, where still another field equals a certain value. Example data:

        Balance     Created                  MeterNumber
        7924.252    02/02/2010 10:31:48 AM   2743800
        7924.243    02/02/2010 11:01:37 AM   2743876
        7924.227    02/02/2010 03:55:50 PM   2743876

    I want to get the balance for the record with the greatest created datetime for a specific meter number. In VFP 7 I can use:

        SELECT a.balance, MAX(a.created)
        FROM MyTable a
        WHERE a.meternumber = '2743876'

    But the VFP v8.0 OLE DB driver I am using in my ASP.NET page enforces VFP 8 rules, which require a GROUP BY listing each non-aggregate field in the SELECT. This would return a record for each balance if I added GROUP BY a.balance to my query. Yes, I could issue SET ENGINEBEHAVIOR 70, but I wanted to know if this can be done without reverting to a previous version's behavior.
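
    A sketch that avoids the aggregate entirely; VFP supports TOP n, which requires an ORDER BY:

        SELECT TOP 1 a.balance, a.created
        FROM MyTable a
        WHERE a.meternumber = '2743876'
        ORDER BY a.created DESC

    A correlated subquery is another option: WHERE a.created = (SELECT MAX(b.created) FROM MyTable b WHERE b.meternumber = a.meternumber).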

    Read the article

  • What is the best way to return results from the business layer to the presentation layer when using LINQ?

    - by samsur
    I have a business layer that has DTOs that are used in the presentation layer. This application uses Entity Framework. Here is an example of a class called RoleDTO:

        public class RoleDTO
        {
            public Guid RoleId { get; set; }
            public string RoleName { get; set; }
            public string RoleDescription { get; set; }
            public int? OrganizationId { get; set; }
        }

    In the BLL I want to have a method that returns a list of DTOs. I would like to know which is the better approach: returning IQueryable or a List of DTOs. My feeling is that returning IQueryable is not a good idea because the connection needs to stay open. Here are the two methods using the different approaches:

        public class RoleBLL
        {
            private servicedeskEntities sde;

            public RoleBLL()
            {
                sde = new servicedeskEntities();
            }

            public IQueryable<RoleDTO> GetAllRoles()
            {
                IQueryable<RoleDTO> role = from r in sde.Roles
                                           select new RoleDTO()
                                           {
                                               RoleId = r.RoleID,
                                               RoleName = r.RoleName,
                                               RoleDescription = r.RoleDescription,
                                               OrganizationId = r.OrganizationId
                                           };
                return role;
            }
        }

    Note: in the above method the data context is a private field set in the constructor, so that the connection stays open.

    Second approach:

        public static List<RoleDTO> GetAllRoles()
        {
            List<RoleDTO> roleDTO = new List<RoleDTO>();
            using (servicedeskEntities sde = new servicedeskEntities())
            {
                var roles = from pri in sde.Roles
                            select new { pri.RoleID, pri.RoleName, pri.RoleDescription };

                // Copy the anonymous-type results into the DTO list, since
                // anonymous types cannot be returned across methods.
                foreach (var item in roles)
                {
                    RoleDTO roleItem = new RoleDTO();
                    roleItem.RoleId = item.RoleID;
                    roleItem.RoleDescription = item.RoleDescription;
                    roleItem.RoleName = item.RoleName;
                    roleDTO.Add(roleItem);
                }
                return roleDTO;
            }
        }

    Please let me know if there is a better approach. Thanks,

    Read the article

  • How to give weight to full matches over partial matches (PostgreSQL)

    - by kagaku
    I've got a query that takes an input and searches for the closest match in zipcode/region/city/metrocode in a location table containing a few tens of thousands of entries (nearly every city in the US). The query I'm using is:

        SELECT metrocode, region, postalcode, region_full, city
        FROM dv_location
        WHERE (
              region ILIKE '%Chicago%'
           OR postalcode ILIKE '%Chicago%'
           OR city ILIKE '%Chicago%'
           OR region_full ILIKE '%Chicago%'
        )
        AND metrocode IS NOT NULL

    Odd thing is, the result set I'm getting back looks like this:

        metrocode;region;postalcode;region_full;city
        862;CA;95712;California;Chicago Park
        862;CA;95712;California;Chicago Park
        602;IL;60611;Illinois;Chicago
        602;IL;60610;Illinois;Chicago

    What am I doing wrong? My thinking is that Chicago should carry greater weight than Chicago Park, since Chicago is an exact match for the term (even though I'm asking for a wildcard match on the term).
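
    A sketch of one way to rank exact matches first: PostgreSQL lets a boolean expression be sorted (true orders after false, hence the DESC), and ILIKE with no wildcards behaves as a case-insensitive exact comparison:

        SELECT metrocode, region, postalcode, region_full, city
        FROM dv_location
        WHERE (region ILIKE '%Chicago%'
           OR  postalcode ILIKE '%Chicago%'
           OR  city ILIKE '%Chicago%'
           OR  region_full ILIKE '%Chicago%')
          AND metrocode IS NOT NULL
        ORDER BY (city ILIKE 'Chicago'
               OR region ILIKE 'Chicago'
               OR region_full ILIKE 'Chicago') DESC,
                 city;

    The ORDER BY pushes whole-string matches to the top while the WHERE clause still admits the partial ones.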

    Read the article
