Search Results

Search found 8161 results on 327 pages for 'django queries'.


  • Using conditionals in LINQ programmatically

    - by Mike B
    I was just reading a recent question on using conditionals in LINQ, and it reminded me of an issue I have not been able to resolve: when building LINQ to SQL queries programmatically, how can this be done when the number of conditionals is not known until runtime? For instance, in the code below, the first clause creates an IQueryable that, if executed, would select all the tasks (called issues) in the database. The second clause refines that to just the issues assigned to one department, if one has been selected in a combobox (which has its selected item bound to the departmentToShow property). How could I do this using the selectedItems collection instead?

        IQueryable<Issue> issuesQuery;

        // Will select all tasks
        issuesQuery = from i in db.Issues
                      orderby i.IssDueDate, i.IssUrgency
                      select i;

        // Filters out all other departments if one is selected
        if (departmentToShow != "All")
        {
            issuesQuery = from i in issuesQuery
                          where i.IssDepartment == departmentToShow
                          select i;
        }

    By the way, the above code is simplified; in the actual code there are about a dozen clauses that refine the query based on the user's search and filter settings.
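
    One way to handle a runtime-sized selection is to keep composing the same IQueryable, either in a loop or by pushing the whole membership test into a single Contains clause, which LINQ to SQL translates into a SQL IN. The sketch below is illustrative only: it assumes selectedItems holds strings, and selectedDepartments is a stand-in for however the combobox exposes its selections.

        // Hedged sketch: filter by any number of selected departments at once.
        List<string> selectedDepartments = selectedItems.Cast<string>().ToList();

        if (selectedDepartments.Count > 0 && !selectedDepartments.Contains("All"))
        {
            // One clause covers N selections - translates to WHERE ... IN (...)
            issuesQuery = issuesQuery.Where(i => selectedDepartments.Contains(i.IssDepartment));
        }

        // Equivalent loop form - note each Where ANDs onto the query, which is
        // right for stacking different filters but wrong for OR-ing alternatives:
        // foreach (var dept in someFilters)
        // {
        //     var d = dept;  // copy to avoid capturing the loop variable
        //     issuesQuery = issuesQuery.Where(i => i.IssDepartment == d);
        // }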


  • Linq is returning too many results when joined

    - by KallDrexx
    In my schema I have two database tables: relationships and relationship_memberships. I am attempting to retrieve all the entries from the relationships table that have a specific member in them, which means joining against the relationship_memberships table. I have the following method in my business object:

        public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
        {
            var results = from r in _context.Repository<DBMappings.relationships>()
                          join m in _context.Repository<DBMappings.relationship_memberships>()
                              on r.rel_id equals m.rel_id
                          where m.obj_id == objId
                          select r;

            return results.ToList<DBMappings.relationships>();
        }

    _context is my generic repository, using code based on the code outlined here. The problem is that I have 3 records in the relationships table and 3 records in the memberships table, each membership tied to a different relationship. Two membership records have an obj_id value of 2, and the other has 3. I am trying to retrieve a list of all relationships related to object #2. When this LINQ runs, _context.Repository<DBMappings.relationships>() returns the correct 3 records and _context.Repository<DBMappings.relationship_memberships>() returns 3 records. However, when results.ToList() executes, the resulting list has 2 issues:

    1) The resulting list contains 6 records, all of type DBMappings.relationships. Upon further inspection there are 2 for each real relationship record, each an exact copy of the other.

    2) All relationships are returned, even those where m.obj_id == 3, even though the objId variable is correctly passed in as 2.

    Can anyone see what's going on? I've spent 2 days looking at this code and I am unable to understand what is wrong. I have joins in other LINQ queries that seem to be working great, and my unit tests show that they are still working, so I must be doing something wrong with this one. It seems like I need an extra pair of eyes on this one :)
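
    Both symptoms (duplicated rows and an ignored where clause) are what you would expect to see if the join is running over already-materialized collections rather than being translated to SQL, so it is worth checking that Repository<T>() returns IQueryable<T> and not IEnumerable<T>. A hedged restructuring that sidesteps the join entirely, under the same assumptions about the repository:

        // Sketch: filter memberships first, then fetch the matching
        // relationships - no join, so nothing to duplicate.
        public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
        {
            var relIds = (from m in _context.Repository<DBMappings.relationship_memberships>()
                          where m.obj_id == objId
                          select m.rel_id).Distinct();

            var results = from r in _context.Repository<DBMappings.relationships>()
                          where relIds.Contains(r.rel_id)
                          select r;

            return results.ToList();
        }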


  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: we have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a Windows Forms-based .NET client. Typically the service receives 50-100 inbound calls per minute to add or query jobs, which are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs, and starts processing them. A job has 7 states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, and Locked. Another responsibility of this service is to update all clients on every state change of every job, which can mean 200+ callbacks to clients per second.

    Question: this whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1,000 jobs at a time: the notifications don't work as expected, and it leads to memory overflows and the like. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!


  • Cocoa Services Programming - How to identify whether the selected item is a folder or file with NSFilenamesPboardType

    - by rockybalboa
    I have some queries regarding Services menu validation. I would like to enable different services provided by my app based on whether a file or a folder is selected in the Finder. I have set NSFilenamesPboardType as the send type for the services. I have gone through the -(id)validRequestorForSendType:(NSString *)sendType returnType:(NSString *)returnType method, but my issue is that the validation there seems to be done based on the send type and return type. In my case, the selected file and folder pasteboard types are the same, and I cannot determine whether the selected item in the Finder is a file or a folder during the validation process (this is before the actual service gets invoked, i.e. when the Services menu is being shown to the user).

    So my question is: is there any way I can get some info about the selected item in the Finder, and validate the different service menus offered by my application based on that info, rather than the basic validation of the send and return types? I have not been able to find a way to do so, but the "Folder Actions" service in Snow Leopard gets enabled only for folders, so it can be done. I did a /System/Library/CoreServices/pbs -dump_pboard and it is using NSFilenamesPboardType as well, yet manages to activate only for folders. Thanks in advance for any help.


  • SQL placeholder in WHERE IN issue, inserted strings fail.

    - by Alastair Pitts
    As part of my job, I need to write SQL queries to connect to our PI database. To generate a query, I need to pass an array of tags (essentially primary keys), but these have to be inserted as strings. As this will be a modular query used for multiple tags, a placeholder is being used. The query relies upon a WHERE ... IN clause, which is where the placeholder is, like below:

        SELECT SUM(value * 5/1000) as "Hourly Flow [kL]"
        FROM piarchive..pitotal
        WHERE tag IN (?)
          AND time between ? and ?
          AND timestep = '1d'
          AND calcbasis = 'Eventweighted'
          AND value <> ''

    The issue is the format in which the tags are passed in. If I add them directly into the query (for testing), they go in this format (these are example numbers):

        '000000012','00000032','005050236','4560236'

    and the query looks like:

        SELECT SUM(value * 5/1000) as "Hourly Flow [kL]"
        FROM piarchive..pitotal
        WHERE tag IN ('000000012','00000032','005050236','4560236')
          AND time between ? and ?
          AND timestep = '1d'
          AND calcbasis = 'Eventweighted'
          AND value <> ''

    which works. If I try to add the same tags through the placeholder, the query fails. If I add only one tag, with no quotes (using the placeholder), the query works. Why is this happening? Is there any way around it?
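
    This is the expected behavior of placeholders: a single ? binds exactly one value, so a comma-joined string of tags arrives at the server as one long literal that matches nothing. The usual workaround is to generate one placeholder per tag when building the query. A hedged C#/ODBC sketch (the connection object and the way the driver binds are assumptions; ODBC parameters bind by position):

        // Build one '?' per tag, then bind each tag to its own parameter.
        string[] tags = { "000000012", "00000032", "005050236", "4560236" };
        string placeholders = string.Join(",", tags.Select(t => "?").ToArray());

        string sql = "SELECT SUM(value * 5/1000) as \"Hourly Flow [kL]\" " +
                     "FROM piarchive..pitotal " +
                     "WHERE tag IN (" + placeholders + ") " +
                     "AND time between ? and ? AND timestep = '1d' " +
                     "AND calcbasis = 'Eventweighted' AND value <> ''";

        using (var cmd = new OdbcCommand(sql, connection))
        {
            foreach (string tag in tags)
                cmd.Parameters.AddWithValue("@tag", tag);   // positional binding
            cmd.Parameters.AddWithValue("@start", startTime);
            cmd.Parameters.AddWithValue("@end", endTime);
            // execute as before
        }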


  • What is a good way to assign order #s to ordered rows in a table in Sybase

    - by DVK
    I have a table T (structure below) which initially contains all-NULL values in an integer "order" column:

        col1 varchar(30),
        col2 varchar(30),
        order int NULL

    I also have a way to order the "colN" columns, e.g.:

        SELECT * FROM T ORDER BY some_expression_involving_col1_and_col2

    What's the best way to assign - in SQL - numeric order values 1 to N in the order column, so that the order values match the order of rows returned by the above ORDER BY? In other words, I would like a single query (Sybase SQL syntax, so no Oracle-style ROWNUM) which assigns order values so that

        SELECT * FROM T ORDER BY order

    returns 100% the same order of rows as the query above. The query does NOT necessarily need to update the table T in place; I'm OK with creating a copy of the table, T2, if that makes the query simpler.

    NOTE 1: A solution must be a real query or set of queries, not involving a loop or a cursor.

    NOTE 2: Assume that the data is uniquely orderable according to the ORDER BY above - no need to worry about the situation where 2 rows could be assigned the same order at random.

    NOTE 3: I would prefer a generic solution, but if you want a specific example of an ordering expression, let's say:

        SELECT * FROM T
        ORDER BY CASE WHEN col1="" THEN "AAAAAA" ELSE col1 END, ISNULL(col2, "ZZZ")
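
    One loop-free pattern that works on older Sybase versions is the correlated-count self-join: each row's order number is the count of rows that sort at or before it, which is well-defined here because NOTE 2 guarantees a unique sort key. A hedged sketch using the NOTE 3 ordering expression (quoting the reserved word "order" assumes set quoted_identifier on; exact correlated-UPDATE syntax may need adjusting for your ASE version):

        UPDATE T
        SET "order" = (SELECT COUNT(*)
                       FROM T t2
                       WHERE CASE WHEN t2.col1 = '' THEN 'AAAAAA' ELSE t2.col1 END
                             < CASE WHEN T.col1 = '' THEN 'AAAAAA' ELSE T.col1 END
                          OR (CASE WHEN t2.col1 = '' THEN 'AAAAAA' ELSE t2.col1 END
                              = CASE WHEN T.col1 = '' THEN 'AAAAAA' ELSE T.col1 END
                              AND ISNULL(t2.col2, 'ZZZ') <= ISNULL(T.col2, 'ZZZ')))

    Note that this is O(N^2) comparisons, so it is fine for modest tables but slow for very large ones.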


  • ColdFusion 8 Slow Performance

    - by JoeBob
    We have started a new CF8 app and it is running dog slow. A test where we go around ColdFusion (running the queries within a database utility) shows normal speed (80 ms); CF8 returns the same query in something like 60 to 80 seconds! I have been looking online and seeing lots of posts about CF8 and performance problems, but I don't get any overall sense of a solution - just lots of people trying things and saying that they didn't have the problem with CF7. We are also seeing instability on the server, and some errors relating to garbage collection and the memory heap.

    We have a number of other applications running on CF8 and they perform adequately. Our programmer is not an expert or a guru; he just plugs away. We have isolated this down to a single query that takes forever to return, so it is not a complicated test. Are there any known CF8 problems or obvious tweaks that we should consider trying? If we have to start over and learn a new environment, I will never make the deadline. JoeBob


  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails:

        create procedure test1 as
            SELECT f1, f2, f3
            INTO #table1
            FROM main_table
            WHERE 1 = 2
            -- insert rows into #table1

            create index my_idx on #table1 (f1)

            SELECT f1, f2, f3
            FROM #table1 (index my_idx)   -- "QUERY X"
            WHERE f1 = 11

    When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning:

        Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1'
        does not exist. Optimizer will choose another index instead.

    This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches, adding "go" after the index creation:

        create index my_idx on #table1 (f1)
        go

    Now the "QUERY X" query plan shows the use of index "my_idx".

    QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad hoc copy above.

    P.S. If it matters, this is on Sybase 12.
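
    The plan for the whole procedure is compiled before create index ever runs, which is why the hint can't be honored. One common workaround is to move "QUERY X" into a second procedure that only gets compiled when first called, after the index exists. A hedged sketch (nested procedures can see the caller's #table in Sybase, but a scratch #table1 must exist when the inner procedure is created; verify the details against your ASE version):

        -- Create a scratch #table1 so the inner proc compiles, then drop it:
        SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
        go
        create procedure test1_inner as
            SELECT f1, f2, f3
            FROM #table1 (index my_idx)
            WHERE f1 = 11
        go
        drop table #table1
        go

        create procedure test1 as
            SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
            -- insert rows into #table1
            create index my_idx on #table1 (f1)
            exec test1_inner   -- plan built at first call, after my_idx exists
        go

    An alternative with the same deferred-compilation effect is to run "QUERY X" through exec() dynamic SQL inside test1.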


  • Understanding many to many relationships and Entity Framework

    - by Anders Svensson
    I'm trying to understand the Entity Framework. I have a table "Users" and a table "Pages", related in a many-to-many relationship through a junction table "UserPages". First of all, I'd like to know if I'm designing this relationship correctly: one user can visit multiple pages, and each page can be visited by multiple users, so am I right in using many-to-many?

    Secondly, and more importantly: as I understand m2m relationships, the User and Page tables should not repeat information, i.e. there should be only one record for each user and each page. But then, in the Entity Framework, how am I able to add new visits to the same page for the same user? I was thinking I could simply use the Count() method on the IEnumerable returned by a LINQ query to get the number of times a user has visited a certain page, but I see no way of doing that. In LINQ to SQL I could access the junction table and add records there to reflect added visits to a certain page by a certain user, as many times as necessary. But in the EF I can't access the junction table; I can only go from a User to a Pages collection and vice versa.

    I'm sure I'm misunderstanding relationships or something, but I just can't figure out how to model this. I could always have a Count column in the Page table, but as far as I have understood you're not supposed to design database tables like that - those values should be derived by queries... Please help me understand what I'm doing wrong.
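
    EF only hides a junction table when it contains nothing but the two foreign keys. As soon as UserPages carries data of its own (a timestamp or a visit count), it should be mapped as its own entity with two many-to-one relationships, and "how many visits" becomes an ordinary query. A hedged sketch with illustrative names, written in later code-first style rather than the designer-generated model of the time:

        // Promote the junction table to a real "Visit" entity.
        public class Visit
        {
            public int UserId { get; set; }
            public int PageId { get; set; }
            public DateTime VisitedAt { get; set; }   // the payload that makes it an entity

            public User User { get; set; }
            public Page Page { get; set; }
        }

        // Record a visit:
        context.Visits.Add(new Visit { UserId = userId, PageId = pageId,
                                       VisitedAt = DateTime.UtcNow });
        context.SaveChanges();

        // Number of times this user visited this page:
        int visits = context.Visits.Count(v => v.UserId == userId && v.PageId == pageId);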


  • memory leak in php script

    - by Jasper De Bruijn
    I have a PHP script that runs a MySQL query, then loops over the result, and in that loop also runs several queries:

        $sqlstr = "SELECT * FROM user_pred WHERE uprType != 2 AND uprTurn=$turn
                   ORDER BY uprUserTeamIdFK";
        $utmres = mysql_query($sqlstr)
            or trigger_error($termerror = __FILE__." - ".__LINE__.": ".mysql_error());
        while($utmrow = mysql_fetch_array($utmres, MYSQL_ASSOC)) {
            // some stuff happens here
            // echo memory_get_usage() . " - 1241<br/>\n";
            $sqlstr = "UPDATE user_roundscores SET ursUpdDate=NOW(),ursScore=$score
                       WHERE ursUserTeamIdFK=$userteamid";
            if(!mysql_query($sqlstr)) {
                $err_crit++;
                $cLog->WriteLogFile("Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid\n");
                echo "Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid<br>\n";
                break;
            }
            unset($sqlstr);
            // echo memory_get_usage() . " - 1253<br/>\n";
            // some stuff happens here too
        }

    The UPDATE query never fails. For some reason, between the two calls to memory_get_usage(), some memory gets added. Because the big loop runs about 500,000 or more times, in the end it really adds up to a lot of memory. Is there anything I'm missing here? Could it perhaps be that the memory is not actually added between the two calls, but at another point in the script?

    Edit - some extra info: before the loop it's at about 5 MB, after the loop about 440 MB, and every update query adds about 250 bytes (the rest of the memory gets added at other places in the loop). The reason I didn't post more of the "other stuff" is because it's about 300 lines of code. I posted this part because it looks to be where the most memory is added.
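
    Two hedged things worth ruling out with the old ext/mysql API: result sets that are never freed (a classic source of slow per-iteration growth in the unshown 300 lines), and the cost of buffering a 500,000-row outer result in PHP. The functions below are standard ext/mysql; the second-connection detail is the catch with unbuffered queries:

        // 1) Free inner result sets when done with them:
        $res = mysql_query($some_inner_select);
        // ... use $res ...
        mysql_free_result($res);

        // 2) For the huge outer SELECT, an unbuffered query avoids holding all
        //    500k rows in PHP memory at once. Caveat: no other query may run on
        //    the SAME link until it is fully fetched, so the inner UPDATEs need
        //    a second connection:
        $link2  = mysql_connect('localhost', 'user', 'pass', true /* new link */);
        $utmres = mysql_unbuffered_query($sqlstr);            // outer, default link
        while ($utmrow = mysql_fetch_array($utmres, MYSQL_ASSOC)) {
            mysql_query($updatesql, $link2);                  // inner, second link
        }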


  • How do I get PHP variables from this MySQL query?

    - by CT
    I am working on an asset database problem using PHP / MySQL. In this script I would like to search my assets by an asset id and have it return all related fields. First I query the database asset table and find the asset's type. Then, depending on the type, I run 1 of 3 queries.

        <?php
        //make database connection
        mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
        mysql_select_db("asset_db") or die(mysql_error());

        //get type of asset
        $type = mysql_query("
            SELECT asset.type
            FROM asset
            WHERE asset.id = 93120
        ") or die(mysql_error());

        switch ($type){
            case "Server":
                //do some stuff that involves a mysql query
                mysql_query("
                    SELECT asset.id, asset.company, asset.location, asset.purchase_date,
                           asset.purchase_order, asset.value, asset.type, asset.notes,
                           server.manufacturer, server.model, server.serial_number,
                           server.esc, server.user, server.prev_user, server.warranty
                    FROM asset
                    LEFT JOIN server ON server.id = asset.id
                    WHERE asset.id = 93120
                ");
                break;
            case "Laptop":
                //identical query, but joining the laptop table instead of server
                break;
            case "Desktop":
                //identical query, but joining the desktop table instead of server
                break;
        }
        ?>

    So far I am able to get asset.type into $type. How would I go about getting the rest of the variables (laptop.model into $model, asset.notes into $notes, and so on)? Thank you.
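
    One thing to watch: mysql_query() returns a result resource, not a value, so as written $type holds a resource and the switch can never match "Server". Each query's row has to be fetched first, and then the columns can be read by name. A short sketch:

        // Fetch a row from the result resource, then read columns as array keys.
        $result = mysql_query("SELECT asset.type FROM asset WHERE asset.id = 93120")
                  or die(mysql_error());
        $row  = mysql_fetch_assoc($result);
        $type = $row['type'];                 // now a string like "Laptop"

        // Same idea after the per-type query:
        $result = mysql_query($sql) or die(mysql_error());
        if ($row = mysql_fetch_assoc($result)) {
            $model = $row['model'];           // laptop.model
            $notes = $row['notes'];           // asset.notes
            extract($row);                    // or create $id, $company, ... in one go
        }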


  • Understanding MongoDB (and NoSQL in general) and how to make the best use of it

    - by Earlz
    I am beginning to think that my next project would work better with a NoSQL solution. The project would involve either a ton of 2-column tables or a ton of dynamic queries with dynamically generated columns in a traditional SQL database, so I feel a NoSQL database would be much cleaner. I'm looking at MongoDB and it looks pretty promising. I'm attempting to make sense of it all, and I will be using MongoMapper in Ruby.

    I'm confused as to how to lay things out in such a freeform database. I've read http://stackoverflow.com/questions/2170152/nosql-best-practices and the answer there says that normalization is usually bad in a NoSQL DB. So what would be the best way of laying out, say, a simple blog with users, posts, and comments? My natural thought was to have a collection for each and then link them by a unique ID. But this apparently is wrong? So what are some of the ways to lay out such a thing?

    My concern with the answer given in the other question is: what if the author's name changed? You'd have to go through updating a ton of posts and comments. But is this an okay thing to do with NoSQL?
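
    A common layout, assuming reads dominate writes: embed comments inside their post, keep users in their own collection, and denormalize only the author's display name next to the stored user id. A rename then becomes one multi-document update per collection rather than a schema problem. A sketch in mongo shell syntax (field names are illustrative):

        // Posts embed their comments; the author name is a denormalized copy.
        db.posts.insert({
            title: "First post",
            author: { _id: userId, name: "earlz" },
            body: "...",
            comments: [
                { author: { _id: otherUserId, name: "bob" }, text: "Nice post" }
            ]
        });

        // When a user renames, repair the copies in one multi-update:
        db.posts.update({ "author._id": userId },
                        { $set: { "author.name": "newname" } },
                        false, true);   // upsert = false, multi = true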


  • How do I construct a more complex single LINQ to XML query?

    - by Cyberherbalist
    I'm a LINQ newbie, so the following might turn out to be very simple and obvious once it's answered, but I have to admit that the question is kicking my arse. Given this XML:

        <measuresystems>
          <measuresystem name="SI" attitude="proud">
            <dimension name="mass" dim="M" degree="1">
              <unit name="kilogram" symbol="kg">
                <factor name="hundredweight" foreignsystem="US" value="45.359237" />
                <factor name="hundredweight" foreignsystem="Imperial" value="50.80234544" />
              </unit>
            </dimension>
          </measuresystem>
        </measuresystems>

    I can query for the value of the conversion factor between kilogram and US hundredweight using the following LINQ to XML, but surely there is a way to condense the four successive queries into a single complex query?

        XElement mss = XElement.Load(fileName);

        IEnumerable<XElement> ms = from el in mss.Elements("measuresystem")
                                   where (string)el.Attribute("name") == "SI"
                                   select el;

        IEnumerable<XElement> dim = from e2 in ms.Elements("dimension")
                                    where (string)e2.Attribute("name") == "mass"
                                    select e2;

        IEnumerable<XElement> unit = from e3 in dim.Elements("unit")
                                     where (string)e3.Attribute("name") == "kilogram"
                                     select e3;

        IEnumerable<XElement> factor = from e4 in unit.Elements("factor")
                                       where (string)e4.Attribute("name") == "pound"
                                          && (string)e4.Attribute("foreignsystem") == "US"
                                       select e4;

        foreach (XElement ex in factor)
        {
            Console.WriteLine((string)ex.Attribute("value"));
        }
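
    The four queries chain together with nested from clauses (SelectMany under the hood), keeping each where next to the element it filters. A sketch against the same document; note the last filter matches name="hundredweight", which is what the XML above actually contains (the original code's "pound" would return nothing):

        var values = from ms in mss.Elements("measuresystem")
                     where (string)ms.Attribute("name") == "SI"
                     from dim in ms.Elements("dimension")
                     where (string)dim.Attribute("name") == "mass"
                     from unit in dim.Elements("unit")
                     where (string)unit.Attribute("name") == "kilogram"
                     from factor in unit.Elements("factor")
                     where (string)factor.Attribute("name") == "hundredweight"
                        && (string)factor.Attribute("foreignsystem") == "US"
                     select (string)factor.Attribute("value");

        foreach (string v in values)
        {
            Console.WriteLine(v);
        }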


  • Erlang ODBC parameter query with null parameters

    - by Schlomer
    Is it possible to pass null values to parameter queries? For example:

        Sql = "insert into TableX values (?,?)".
        Params = [{sql_integer, [Val1]},
                  {sql_float, [Val2]}].   % Val2 may be a float, or it may be the atom 'undefined'
        odbc:param_query(OdbcRef, Sql, Params).

    Now, of course odbc:param_query/3 is going to complain if Val2 is undefined when it tries to match it against sql_float, but my question is: is it possible to use a parameterized query, such as

        Sql = "insert into TableY values (?,?,?,?,?,?,?,?,?)".

    with any null parameters? I have a use case where I am dumping a large amount of real-time data into a database by either inserting or updating. Some of the tables I am updating have a dozen or so nullable fields, and I do not have a guarantee that all of the data will be there. Concatenating SQL together for each query while checking for null values seems complex, and the wrong way to do it. Having a parameterized query for each permutation is simply not an option. Any thoughts or ideas would be fantastic! Thank you!


  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries that get the updates, deletes, and inserts segregated into their own calls. I have 2 tables, one in each of 2 databases: one is a read-only feeds database, and the other is the T-SQL read/write production source. There are a few key columns in common between the two. To set up, I do this:

        List<model.AutoWithImage> feedProductList =
            _dbFeed.AutoWithImage.Where(a => a.ClientID == ClientID).ToList();

        List<model.vwCompanyDetails> companyDetailList =
            _dbRiv.vwCompanyDetails.Where(a => a.ClientID == ClientID).ToList();

        foreach (model.vwCompanyDetails companyDetail in companyDetailList)
        {
            List<model.Product> productList = _dbRiv.Product.Include("Company")
                .Where(a => a.Company.CompanyId == companyDetail.CompanyId).ToList();
        }

    Now that I have a (source) list of products from the feed, and an existing (target) list of products from my production DB, I'd like to do 3 things:

    1. Find all SKUs in the feed that are not in the target
    2. Find all SKUs that are in both, that are active feed products, and update the target
    3. Find all SKUs that are in both, that are inactive, and soft-delete them from the target

    What are the best practices for doing this without running a double loop? I would prefer a LINQ to Objects solution, as I already have my objects.

    EDIT: BTW, I will need to transfer info from feed rows to target rows in the first 2 instances, and just set a flag in the last instance. TIA
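
    With both sides already materialized as lists, one approach is to index the target by its key once and then partition the feed in three passes, so every membership check is an O(1) dictionary lookup instead of an inner loop. In the sketch below, Sku and IsActive are stand-ins for the real shared key and status properties:

        // Index the target side once.
        var targetBySku = productList.ToDictionary(p => p.Sku);

        // 1) inserts: feed SKUs missing from the target
        var toInsert = feedProductList
            .Where(f => !targetBySku.ContainsKey(f.Sku))
            .ToList();

        // 2) updates: in both and active - pair each feed row with its target row
        var toUpdate = feedProductList
            .Where(f => f.IsActive && targetBySku.ContainsKey(f.Sku))
            .Select(f => new { Feed = f, Target = targetBySku[f.Sku] })
            .ToList();

        // 3) soft deletes: in both and inactive
        var toFlag = feedProductList
            .Where(f => !f.IsActive && targetBySku.ContainsKey(f.Sku))
            .Select(f => targetBySku[f.Sku])
            .ToList();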


  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcached + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now:

    1. Using memcached on all major queries and leaving it alone to do what it does best.
    2. Using memcached for the most commonly retrieved data, combined with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcached is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes.

    What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcached, and upgrade the memory as the load increases with the number of users? Thanks a lot!
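
    The combined option usually ends up as a read-through chain: check memory, fall back to disk, fall back to the database, repopulating tiers on the way back. A minimal PHP sketch of that chain, assuming the pecl Memcache extension, a connected $memcache instance, and a writable cache directory (paths, keys, and TTLs are illustrative):

        function cache_get($key, $loader, $ttl = 300) {
            global $memcache;
            $value = $memcache->get($key);
            if ($value !== false) return $value;      // 1st tier: memory

            $file = "/var/cache/app/" . md5($key);
            if (is_file($file) && time() - filemtime($file) < $ttl) {
                $value = unserialize(file_get_contents($file));   // 2nd tier: disk
            } else {
                $value = $loader();                               // miss: hit the DB
                file_put_contents($file, serialize($value));
            }
            $memcache->set($key, $value, 0, $ttl);    // repopulate the hot tier
            return $value;
        }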


  • SQL using sum to count results of multiple subqueries

    - by asdas
    I have a table with 2 columns: an integer and a varchar. I am given only the integer values, but need to do work on the varchar (string) values. Given an integer, and a list of other integers (no overlap), I want to find the string for that single integer. Then I want to take that string and run INSTR with it against the strings for all the other integers. Then I want the sum of all the INSTR results, so the output is one number.

    So let's say I have an int x, and a list y = [y0, y1, y2]. I want to do 3 INSTR calls, like:

        SUM(INSTR(string for x, string for y0),
            INSTR(string for x, string for y1),
            INSTR(string for x, string for y2))

    I think I'm going in the wrong direction; this is what I have. I'm not good with subqueries:

        SELECT SUM (
            SELECT INSTR (
                SELECT string FROM pages WHERE int=? LIMIT 1,
                (SELECT string FROM pages WHERE id=? OR id=? OR id=? LIMIT 3)
            )
        )
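
    SUM is an aggregate over rows, so rather than nesting scalar subqueries, let the y-list be the row source and look up x's string once. A hedged MySQL sketch, using the id/string column names from the second query, with x = 1 and y = 2, 3, 4 inlined:

        SELECT SUM(INSTR((SELECT string FROM pages WHERE id = 1),  -- string for x
                         p.string))                                -- each y string
        FROM pages p
        WHERE p.id IN (2, 3, 4);                                   -- the y list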


  • LINQ to SQL performance on highly loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC, I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast - primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it, and updates it. Primarily the updates are increments and decrements of values. I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @id

    That worked with no problems, obviously. But with LINQ to SQL, the data is taken in the beginning, moved to the class, changed, and then saved:

        Stats.registeredusers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased elsewhere (which happens on my site all the time), then LINQ goes "oops, this value is already 100,001 - I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002.

    Like I said, this happened to me all the time - the stats value was incremented about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
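
    One middle ground is to keep LINQ to SQL for everything else but push just the hot counters back into a server-side increment: DataContext.ExecuteCommand sends a parameterized SQL statement through the same DataContext, so the original atomic UPDATE survives. The table and column names below are taken from the SQL above; the id variable is illustrative:

        // Atomic server-side increment - no entity is read or tracked,
        // so the optimistic-concurrency check never fires.
        Db.ExecuteCommand("UPDATE table SET value = value + 1 WHERE ID = {0}", id);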


  • querying a huge database table takes too much time in mysql

    - by Vijay
    Hi all, I am running SQL queries on a MySQL db table that has 110M+ unique records for a whole day.

    Problem: whenever I run any query with a WHERE clause, it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole db table. Could you please guide me on optimizing / restructuring the deployment model?

    Site description: mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0; 4 GB RAM, dual-core dual-CPU 3 GHz; RHEL 3.

    my.cnf contents:

        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,
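
    One thing that jumps out of the CREATE TABLE: there are no indexes at all, so every WHERE clause is a full scan of 110M+ rows. The usual first step is to index exactly the columns your queries filter on; the columns below are guesses to show the shape, so substitute your real predicates. On MyISAM, a larger key_buffer also helps those indexes stay in RAM:

        ALTER TABLE RAW_LOG_20100504
            ADD INDEX idx_user (USER),
            ADD INDEX idx_gw_time (GATEWAY, TIMESTAMP);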


  • How to structure an index for type-ahead on an extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open source suggestions as well. I am looking for advice, tales from the trenches, or even better, direct instruction on what I will need as far as amount of hardware and structure of software.

    Requirements:

    Must have:
    - The ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen')
    - The ability to return results very quickly; I'd say 500 ms is an upper bound

    Nice to have:
    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style
    - In-word substring matching (so, for example, 'st' would match 'bestseller')

    Note: this index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
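
    For the must-haves, Lucene's PrefixQuery covers starts-with matching out of the box. A minimal hedged sketch against an existing index, using the Lucene 3.x-era Java API (the field name "term" and the index path are assumptions); for the in-word nice-to-have, the usual approach is to index edge n-grams of each term at build time and query them with a plain TermQuery instead:

        import java.io.File;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.PrefixQuery;
        import org.apache.lucene.search.ScoreDoc;
        import org.apache.lucene.search.TopDocs;
        import org.apache.lucene.store.FSDirectory;

        public class TypeAhead {
            public static void main(String[] args) throws Exception {
                // Open the prebuilt index read-only and run a prefix match.
                IndexSearcher searcher =
                    new IndexSearcher(FSDirectory.open(new File("/data/typeahead")), true);
                TopDocs hits = searcher.search(new PrefixQuery(new Term("term", "st")), 10);
                for (ScoreDoc sd : hits.scoreDocs) {
                    System.out.println(searcher.doc(sd.doc).get("term"));
                }
                searcher.close();
            }
        }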


  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give it a try against a codeplex project to which I contribute. That way, I can test functionality against a larger number of changesets than I have locally. While it works fine in my environment, heading out over the wire to codeplex has left me stumped. My application queries the history, but then, when trying to iterate through the history (which is when it lazy-loads the IEnumerable), my application hangs. Looking at Intellitrace, I see a couple of "first chance" exceptions that the "item doesn't exist at the specified version"-- which is patently not true, as I'm trying to get history for "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors being returned to me after forcing debugging to pause. Other operations (like GetItems() ) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code: IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None, null, null, null, 5, true, false); List<ChangesetItem> returnList = new List<ChangesetItem>(); foreach (Changeset cs in items) //hangs here on first iteraiton { ChangesetItem newItem = new ChangesetItem() { ChangesetId = cs.ChangesetId, //ChangesetNote = cs.CheckinNote.Values[0].Value, Comment = cs.Comment, Committer = cs.Committer, CreationDate = cs.CreationDate }; returnList.Add(newItem); }


  • Creating has_many :through records twice

    - by antiarchitect
    I have these models:

        class Question < ActiveRecord::Base
          WEIGHTS = %w(medium hard easy)

          belongs_to :test
          has_many :answers, :dependent => :destroy
          has_many :testing_questions
        end

        class Testing < ActiveRecord::Base
          belongs_to :student, :foreign_key => 'user_id'
          belongs_to :subtest

          has_many :testing_questions, :dependent => :destroy
          has_many :questions, :through => :testing_questions
        end

    So when I try to bind questions to a testing on its creation:

        >> questions = Question.all
        ...
        >> questions.count
        => 3
        >> testing = Testing.create(:user_id => 3, :subtest_id => 1, :questions => questions)

    the SQL log shows one INSERT INTO testings followed by six INSERT INTO testing_questions statements: each of the three question ids (1, 2, 3) is inserted twice for testing_id 31, and the console returns:

        => #<Testing id: 31, subtest_id: 1, user_id: 3, created_at: "2010-05-18 00:53:05", updated_at: "2010-05-18 00:53:05">

    There are 6 INSERT queries and 6 testing_questions records created for only 3 questions. Why?


  • How do I retrieve an automated report and save it to a database?

    - by Mason Wheeler
    I've got a web server that will take scripts in Python, PHP, or Perl. I don't know much about any of those languages, but of the three, Python seems the least scary. It has a MySQL database set up, and I know enough SQL to manage it and write queries for it. I also have a program that I want to add automated error reporting to: something goes wrong, it sends a bug report to my server. What I don't know how to do is write a Python script that will sit on the web server and, when my program sends in a bug report, do the following:

    1. Receive the bug report.
    2. Parse it out into sections.
    3. Insert it into the database.
    4. Have the server send me an email.

    From what little I understand, this seems like it shouldn't be too difficult if I only knew what I was doing. Could someone point me to a site that explains the basic principles I'd need to create a script like this?
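
    The whole pipeline fits in one small CGI script. The sketch below is a hedged outline in Python 2 (the era's default): the table layout, the "key: value" report format, and the addresses are all assumptions, and the four numbered steps from above are marked in comments.

        #!/usr/bin/env python
        import cgi, smtplib
        from email.mime.text import MIMEText
        import MySQLdb

        # 1) receive the bug report (assumes an HTTP POST field named "report")
        form   = cgi.FieldStorage()
        report = form.getfirst("report", "")

        # 2) parse into sections (assumes "key: value" lines)
        fields = dict(line.split(":", 1) for line in report.splitlines() if ":" in line)

        # 3) insert into MySQL (table and columns are hypothetical)
        db  = MySQLdb.connect(host="localhost", user="buguser", passwd="secret", db="bugs")
        cur = db.cursor()
        cur.execute("INSERT INTO reports (version, message) VALUES (%s, %s)",
                    (fields.get("version", "").strip(), report))
        db.commit()

        # 4) notify by email
        msg = MIMEText(report)
        msg["Subject"], msg["From"], msg["To"] = "New bug report", "bugs@example.com", "me@example.com"
        smtplib.SMTP("localhost").sendmail(msg["From"], [msg["To"]], msg.as_string())

        print "Content-Type: text/plain\n"
        print "OK"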


  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated by each query?

    I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > '$currentItem'";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think there is some lag time between the update queries being run by the PHP script and the database actually updating the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when multiple scripts are running in parallel? Would using a text file instead of the db help?
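
    The duplicates come not from database lag but from the gap between getCurrentItem() and setCurrentItem(): two scripts can both read the old value before either writes. One hedged fix is to collapse the check and the claim into a single conditional UPDATE and let the affected-row count decide the winner; the database then serializes the race for you. Sketched with plain ext/mysql calls for clarity (the same idea works through the wrapper object from the post):

        function claimItem($id)
        {
            mysql_query("UPDATE settings SET currentItemId = $id
                         WHERE currentItemId < $id");
            // Exactly one racing script changes the row; the others see 0 rows.
            return mysql_affected_rows() > 0;
        }

        foreach ($items as $i) {
            if (!claimItem($i->id)) continue;  // another instance got there first
            // Process the item here
        }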


  • No buffer space available (maximum connections reached?) from Postgres EDB driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows:

        com.edb.util.PSQLException: The connection attempt failed.
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189)
            at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
            at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161)
            at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
            at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24)
            at com.edb.Driver.makeConnection(Driver.java:391)
            at com.edb.Driver.connect(Driver.java:266)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            ... 12 more
        Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at com.edb.core.PGStream.<init>(PGStream.java:70)
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115)
            ... 20 more

    When the error occurred we were not able to connect to the internet or to the DB, and had to reboot the system. The error occurred again after 3 days, at the same code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many open TCP connections, i.e. we had not reached the maximum limit.

    Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get related data. We are using the following system and applications:

    - Windows 2003 Server SP2
    - Java 1.6
    - Postgres Plus Advanced Server 8.4
    - edb-jdbc14.jar driver for connecting to the DB from Java

    We have used the default configuration of the Postgres DB, except for increasing the connections from 100 to 120. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?

