Search Results


  • Optimizing a MySQL query to avoid "Using filesort"

    - by usef_ksa
    I need your help optimizing this query so that it avoids "Using filesort". The job of the query is to select all the articles that belong to a specific tag:

        SELECT title
        FROM tag, article
        WHERE tag = 'Riyad' AND tag.article_id = article.id
        ORDER BY tag.article_id;

    The table structures are the following:

        -- Tag table
        CREATE TABLE `tag` (
          `tag` VARCHAR(30) NOT NULL,
          `article_id` INT NOT NULL,
          INDEX (`tag`)
        ) ENGINE = MYISAM;

        -- Article table
        CREATE TABLE `article` (
          `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          `title` VARCHAR(60) NOT NULL
        ) ENGINE = MYISAM;

    Sample data:

        INSERT INTO `article` VALUES (1, 'About Riyad');
        INSERT INTO `article` VALUES (2, 'About Newyork');
        INSERT INTO `article` VALUES (3, 'About Paris');
        INSERT INTO `article` VALUES (4, 'About London');
        INSERT INTO `tag` VALUES ('Riyad', 1);
        INSERT INTO `tag` VALUES ('Saudia', 1);
        INSERT INTO `tag` VALUES ('Newyork', 2);
        INSERT INTO `tag` VALUES ('USA', 2);
        INSERT INTO `tag` VALUES ('Paris', 3);
        INSERT INTO `tag` VALUES ('France', 3);
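
    One plausible approach (a sketch, not from the original post): the single-column index on `tag` serves the WHERE clause but gives MySQL no sorted source for `article_id`, so it sorts by hand. A composite index covering both columns should satisfy the lookup and the ORDER BY in one pass:

        -- Assumption: the composite index replaces the original single-column one
        ALTER TABLE `tag` DROP INDEX `tag`;
        ALTER TABLE `tag` ADD INDEX `idx_tag_article` (`tag`, `article_id`);

        -- EXPLAIN should now stop reporting "Using filesort"
        EXPLAIN SELECT title
        FROM tag JOIN article ON tag.article_id = article.id
        WHERE tag.tag = 'Riyad'
        ORDER BY tag.article_id;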

  • Will rel=canonical break site: queries?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this:

        http://ourproduct.com/documentation/version/pageid

    where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged between versions 3.0 and 4.0 of our product, it'd be reachable by two different URLs:

        http://ourproduct.com/documentation/3.0/configuration-best-practices
        http://ourproduct.com/documentation/4.0/configuration-best-practices

    This URL scheme allows us to scope Google search results to the documentation for a particular product version, like this:

        configuration site:ourproduct.com/documentation/4.0

    But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results; instead, we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching two versions but not all of them are a corner case, so we don't care which version(s) show up in that case; the primary use cases we care about are searching one version or searching all versions.) But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know of a site which uses rel=canonical to redirect across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
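
    For reference, the markup in question would sit in the <head> of each older copy of the page, pointing at the newest one (a sketch based on the URLs above):

        <link rel="canonical"
              href="http://ourproduct.com/documentation/4.0/configuration-best-practices" />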

  • Selecting a good SQL Server 2008 spatial index with large polygons

    - by andynormancx
    I'm having some fun trying to pick a decent SQL Server 2008 spatial index setup for a data set I am dealing with. The dataset is polygons representing contours over the whole globe. There are 106,000 rows in the table, and the polygons are stored in a geometry field. The issue I have is that many of the polygons cover a large portion of the globe. This seems to make it very hard to get a spatial index that will eliminate many rows in the primary filter. For example, look at the following query:

        SELECT "ID", "CODE", "geom".STAsBinary() as "geom"
        FROM "dbo"."ContA"
        WHERE "geom".Filter(
            geometry::STGeomFromText('POLYGON ((-142.03193662573682 59.53396984952896, -142.03193662573682 59.88928136451884, -141.32743833481925 59.88928136451884, -141.32743833481925 59.53396984952896, -142.03193662573682 59.53396984952896))', 4326)
        ) = 1

    This queries an area which intersects with only two of the polygons in the table. No matter what combination of spatial index settings I choose, that Filter() always returns around 60,000 rows. Replacing Filter() with STIntersects() of course returns just the two polygons I want, but takes much longer (Filter() is 6 seconds, STIntersects() is 12 seconds). Can anyone give me any hints on whether there is a spatial index setup that is likely to improve on 60,000 rows, or is my dataset just not a good match for SQL Server's spatial indexing?
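
    For concreteness, one index variation to benchmark might look like this (a sketch only: the grid densities and CELLS_PER_OBJECT are guesses to tune, and polygons that each cover much of the globe may defeat any grid in the primary filter):

        CREATE SPATIAL INDEX SIdx_ContA_geom ON dbo.ContA(geom)
        USING GEOMETRY_GRID
        WITH (
            BOUNDING_BOX = (-180, -90, 180, 90),  -- whole globe, per the description
            GRIDS = (LEVEL_1 = HIGH, LEVEL_2 = HIGH, LEVEL_3 = HIGH, LEVEL_4 = HIGH),
            CELLS_PER_OBJECT = 64
        );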

  • HTML form with multiple submit options

    - by phimuemue
    Hi, I'm trying to create a small web app that is used to remove items from a MySQL table. It just shows the items in an HTML table, with a [delete] button for each item:

        item_1 [delete]
        item_2 [delete]
        ...
        item_N [delete]

    To achieve this, I dynamically generate the table via PHP into an HTML form, so the form has N [delete] buttons. The form uses the POST method for transferring data. For the deletion I wanted to submit the ID (the primary key in the MySQL table) of the corresponding item to the executing PHP script, so I introduced hidden fields (all named 'ID') that store the ID of the corresponding item. However, pressing any [delete] seems to always submit just the last ID (i.e. the value of the last ID hidden field). Is there any way to submit just the ID field of the corresponding item without using multiple forms? Or is it possible to submit data from multiple forms with just one submit button? Or should I choose a completely different way? The reason I want to do it in just one single form is that there are some "global" parameters that shall not be placed next to each item, but just once for the whole table.
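
    A common way out (a sketch, not from the post; delete.php and the field names are made up): drop the hidden fields and give each delete control its own value. Browsers submit only the name/value pair of the button that was actually pressed, so one form suffices and the global fields can stay at the top:

        <form action="delete.php" method="post">
            <input type="hidden" name="global_param" value="whatever">
            item_1 <button type="submit" name="delete_id" value="1">delete</button>
            item_2 <button type="submit" name="delete_id" value="2">delete</button>
        </form>

    The handling script then reads $_POST['delete_id'] to know which row to remove.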

  • Use JavaScript RegEx to extract column names from SQLite CREATE TABLE SQL

    - by NimbusSoftware
    I'm trying to extract column names from a SQLite result set, using the sql column of sqlite_master. I get hosed up in the regular expressions in the match() and split() functions:

        t1.executeSql('SELECT name, sql FROM sqlite_master WHERE type="table" and name!="__WebKitDatabaseInfoTable__";', [],
            function(t1, result) {
                for (i = 0; i < result.rows.length; i++) {
                    var tbl = result.rows.item(i).name;
                    var dbSchema = result.rows.item(i).sql;
                    // errors out on next line
                    var columns = dbSchema.match(/.*CREATE\s+TABLE\s+(\S+)\s+\((.*)\).*/)[2].split(/\s+[^,]+,?\s*/);
                }
            },
            function() { console.log('err1'); }
        );

    I want to parse SQL statements like these...

        CREATE TABLE sqlite_sequence(name,seq);
        CREATE TABLE tblConfig (Key TEXT NOT NULL,Value TEXT NOT NULL);
        CREATE TABLE tblIcon (IconID INTEGER NOT NULL PRIMARY KEY,png TEXT NOT NULL,img32 TEXT NOT NULL,img64 TEXT NOT NULL,Version TEXT NOT NULL)

    ...into strings like these:

        name,seq
        Key,Value
        IconID,png,img32,img64,Version

    Any help with a RegEx would be greatly appreciated.
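
    One possible repair (a sketch, untested): the original pattern requires whitespace between the table name and "(", so it returns null for "sqlite_sequence(name,seq)" and the [2] lookup throws. Making the space optional, then taking the first word of each comma-separated definition, handles all three samples (the naive comma split assumes no commas inside type declarations):

        var columns = [];
        var m = dbSchema.match(/CREATE\s+TABLE\s+\S+?\s*\(([\s\S]*)\)/i);
        if (m) {
            var defs = m[1].split(',');
            for (var j = 0; j < defs.length; j++) {
                // first token of each definition is the column name
                columns.push(defs[j].replace(/^\s+/, '').split(/\s+/)[0]);
            }
        }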

  • Design Decision - Scaling out a web-based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with, and is expected to grow to 50M users within a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, let me use a trivial scenario: User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, authenticating users (or similar services related to user entities) will have to move out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal is to use IPC initially, and when we need to scale out, switch internally to TCP-based RPC calls so the services can easily be scaled out. For example, I am referring to starting with System.IO.Pipes.NamedPipeServerStream and moving on to a TcpListener later on. If we have a design that properly encapsulates this approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this a good approach? Any suggestions would be great. Note: database scaling is definitely a second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
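
    A minimal sketch of the encapsulation being described (C#; every type name here is illustrative, not from the post): callers depend on an interface, and a factory picks the transport, so moving from in-process calls to pipe IPC to TCP RPC is a configuration change rather than a rewrite:

        public interface IUserService
        {
            bool AuthenticateUser(string userName, string password);
        }

        public static class UserServiceFactory
        {
            public static IUserService Create(string transport)
            {
                switch (transport)
                {
                    case "local": return new LocalUserService();      // direct method calls
                    case "pipe":  return new PipeUserServiceClient(); // named-pipe IPC on one box
                    case "tcp":   return new TcpUserServiceClient();  // RPC to an internal server
                    default:      throw new ArgumentException("unknown transport");
                }
            }
        }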

  • Linq-to-SQL: How to shape the data with group by?

    - by Cheeso
    I have an example database containing tables for Movies, People and Credits. The Movie table contains a Title and an Id. The People table contains a Name and an Id. The Credits table relates Movies to the People that worked on those Movies, in a particular role. The table looks like this:

        CREATE TABLE [dbo].[Credits] (
            [Id] [int] IDENTITY (1, 1) NOT NULL PRIMARY KEY,
            [PersonId] [int] NOT NULL FOREIGN KEY REFERENCES People(Id),
            [MovieId] [int] NOT NULL FOREIGN KEY REFERENCES Movies(Id),
            [Role] [char] (1) NULL
        )

    In this simple example, the [Role] column is a single character, by my convention either 'A' to indicate the person was an actor on that particular movie, or 'D' for director. I'd like to perform a query on a particular person that returns the person's name, plus a list of all the movies the person has worked on, and the roles in those movies. If I were to serialize it to JSON, it might look like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven", "roles": ["actor", "director"] },
            { "title": "Sands of Iwo Jima", "roles": ["director"] },
            { "title": "Dirty Harry", "roles": ["actor"] },
            ...
          ]
        }

    How can I write a LINQ-to-SQL query that shapes the output like that? I'm having trouble doing it efficiently. If I use this query:

        int personId = 10007;
        var persons = from p in db.People
                      where p.Id == personId
                      select new
                      {
                          name = p.Name,
                          movies = (from m in db.Movies
                                    join c in db.Credits on m.Id equals c.MovieId
                                    where (c.PersonId == personId)
                                    select new
                                    {
                                        title = m.Title,
                                        role = (c.Role == "D" ? "director" : "actor")
                                    })
                      };

    I get something like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven", "role": "actor" },
            { "title": "Unforgiven", "role": "director" },
            { "title": "Sands of Iwo Jima", "role": "director" },
            { "title": "Dirty Harry", "role": "actor" },
            ...
          ]
        }

    ...but as you can see there's a duplicate of each movie for which Eastwood played multiple roles. How can I shape the output the way I want?
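
    A sketch of one way to collapse the duplicates (untested): group the credits for this person by movie, so each movie appears once carrying its list of roles:

        var persons = from p in db.People
                      where p.Id == personId
                      select new
                      {
                          name = p.Name,
                          movies = from c in db.Credits
                                   where c.PersonId == personId
                                   join m in db.Movies on c.MovieId equals m.Id
                                   group c by m.Title into g
                                   select new
                                   {
                                       title = g.Key,
                                       roles = g.Select(c => c.Role == "D" ? "director" : "actor")
                                   }
                      };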

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me, because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int,
            col2 int,
            col3 int,
            col4 int,
            PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer): the index is set up to make it easy to find the range of rows to be removed and to do so. However, at this point it also needs to update IX_1, and the query I gave it offers no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way I can estimate the impact of removing one or more of these indices without having to actually try it?
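
    One place to look (a sketch; assumes the SQL Server 2005+ operational-stats DMV, with the table name taken from the example above): maintenance activity is tracked per index, which can hint at how much of the delete work each nonclustered index absorbs:

        SELECT i.name AS index_name,
               s.leaf_insert_count, s.leaf_update_count, s.leaf_delete_count
        FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Foo'), NULL, NULL) AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id;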

  • Django URL user id versus UserProfile id problem

    - by dana
    Hello there, I have a mini community where each user can search for and find another user's profile. UserProfile is a model class, indexed differently from the User model class (a user id is not equal to the corresponding userprofile id). But I cannot see a user's profile by typing the corresponding id into the URL; I only ever see the profile of the currently logged-in user. Why is that? I'd also like to have the username (also a unique key of the user table) in my URL, and NOT the id (a number). The guilty part of the code is below. What can I replace that request.user with so that it will actually display the user I searched for, and not the one currently logged in?

        def profile_view(request, id):
            u = UserProfile.objects.get(pk=id)
            cv = UserProfile.objects.filter(created_by=request.user)
            blog = New.objects.filter(created_by=request.user)
            return render_to_response('profile/publicProfile.html', {
                'u': u,
                'cv': cv,
                'blog': blog,
            }, context_instance=RequestContext(request))

    In urls (of the accounts app):

        url(r'^profile_view/(?P<id>\d+)/$', profile_view, name='profile_view'),

    And in the template:

        <h3>Recent Entries:</h3>
        {% load pagination_tags %}
        {% autopaginate list 10 %}
        {% paginate %}
        {% for object in list %}
          <li>{{ object.post }} <br />
            Voted: {{ vote.count }} times.<br />
            {% for reply in object.reply_set.all %}
              {{ reply.reply }} <br />
            {% endfor %}
            <a href=''> {{ object.created_by }}</a> <br />
            {{ object.date }} <br />
            <a href="/vote/save_vote/{{ object.id }}/">Vote this</a>
            <a href="/replies/save_reply/{{ object.id }}/">Comment</a>
          </li>
        {% endfor %}

    Thanks in advance!
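
    A sketch of the likely fix (it assumes UserProfile has a ForeignKey to User named `user`; adjust to the real field): filter on the owner of the looked-up profile instead of request.user, and key the URL on the username instead of the pk:

        def profile_view(request, username):
            u = UserProfile.objects.get(user__username=username)
            cv = UserProfile.objects.filter(created_by=u.user)
            blog = New.objects.filter(created_by=u.user)
            ...

        url(r'^profile_view/(?P<username>[\w.@+-]+)/$', profile_view, name='profile_view'),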

  • UUIDs in Rails3

    - by Rob Wilkerson
    I'm trying to set up my first Rails 3 project and, early on, I'm running into problems with either uuidtools, my UUIDHelper, or perhaps callbacks. I'm obviously trying to use UUIDs and (I think) I've set things up as described in Ariejan de Vroom's article. I've tried using the UUID as a primary key and also as simply a supplemental field, but it seems like the UUIDHelper is never being called. I've read many mentions of callbacks and/or helpers changing in Rails 3, but I can't find any specifics that would tell me how to adjust. Here's my setup as it stands at this moment (there have been a few iterations):

        # migration
        class CreateImages < ActiveRecord::Migration
          def self.up
            create_table :images do |t|
              t.string :uuid, :limit => 36
              t.string :title
              t.text :description
              t.timestamps
            end
          end
          ...
        end

        # lib/uuid_helper.rb
        require 'rubygems'
        require 'uuidtools'

        module UUIDHelper
          def before_create()
            self.uuid = UUID.timestamp_create.to_s
          end
        end

        # models/image.rb
        class Image < ActiveRecord::Base
          include UUIDHelper
          ...
        end

    Any insight would be much appreciated. Thanks.
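
    One pattern worth trying (a sketch, not a confirmed Rails 3 fix): register the callback explicitly when the module is included, instead of relying on a method named before_create being picked up:

        module UUIDHelper
          def self.included(base)
            base.before_create :assign_uuid
          end

          private

          def assign_uuid
            # UUIDTools::UUID is the uuidtools 2.x namespace; 1.x exposes plain UUID
            self.uuid = UUIDTools::UUID.timestamp_create.to_s
          end
        end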

  • SQL Server - Cascade delete has multiple paths

    - by Anders Juul
    Hi all, I have two tables, Results and ComparedResults. ComparedResults has two columns which reference the primary key of the Results table. My problem is that if a record in Results is deleted, I wish to delete all records in ComparedResults which reference the deleted record, regardless of whether it's one column or the other (and the columns may reference the same Results row). A row in Results may be deleted directly, or through a cascade delete caused by deleting in a third table. Googling this indicates that I need to disable cascade delete and rewrite all cascade deletes to use triggers instead. Is that REALLY necessary? I'd be prepared to do much restructuring of the database to avoid this, as my main area is OO programming, and databases should 'just work'. It is hard to see, however, how restructuring could help, as I would just move the problem around... Or am I missing something? I am also a bit at a loss as to why my initial construct should even be a problem for SQL Server! Any comments welcome and much appreciated! Anders, Denmark
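
    For reference, the trigger-based workaround usually looks roughly like this (a sketch; the column names RefId1/RefId2 are invented). One caveat worth knowing: SQL Server does not allow an INSTEAD OF DELETE trigger on a table that itself carries an ON DELETE CASCADE foreign key, which is why the advice you found says to rewrite all the cascades as triggers rather than just one:

        CREATE TRIGGER trg_Results_Delete ON Results
        INSTEAD OF DELETE
        AS
        BEGIN
            DELETE FROM ComparedResults
            WHERE RefId1 IN (SELECT Id FROM deleted)
               OR RefId2 IN (SELECT Id FROM deleted);

            DELETE FROM Results
            WHERE Id IN (SELECT Id FROM deleted);
        END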

  • I deleted a full set of columns in a save, but have the original larger sheet. Can I get that back?

    - by Ben Henley
    I have an original sheet that has over 39,000 lines in it. I knocked it down to 1,800 lines that I want to import into my database; however, dumb#$$ that I am, I selected only the visible cells and killed like 10 columns that I need. Is there a way to compare to the original sheet using a specific column, i.e. SKU, and pull the data from the original to put back in the missing columns, or do I have to just re-edit the whole thing down again? Please help, as this takes a good day or two to minimize. Any and all help is much appreciated. Below is the column list on the edited sheet vs. the original sheet.

    Edited sheet:

        SKU, DESCRIPTION, VENDOR PART #, RETAIL UNIT CONVERSION, RETAIL U/M, RETAIL DEPARTMENT, VENDOR NAME, SELL PACK QUANTITY, BREAK SELL PACK FLAG, BLISH VENDOR #, FINE LINE CLASS, ITEM ACTION FLAG, PRIMARY UPC, STOCK U/M, WEIGHT, LENGTH, WIDTH, HEIGHT, SHIP-VIA EXCLUSION, HAZARDOUS CODE, PRICE, SUGGESTED RETAIL

    Original sheet:

        SKU, DESCRIPTION, RETAIL UNIT CONVERSION, RETAIL U/M, RETAIL DEPARTMENT, VENDOR NAME, SELL PACK QUANTITY, BREAK SELL PACK FLAG, FINE LINE CLASS, HAZARDOUS CODE, PRICE, SUGGESTED RETAIL, RETAIL SENSITIVITY CODE, 2ND UPC CODE, 3RD UPC CODE, 4TH UPC CODE, HEADLINE, BULLET #1, BULLET #2, BULLET #3, BULLET #4, BULLET #5, BULLET #6, BULLET #7, BULLET #8, BULLET #9, SIZE, COLOR, CASE QUANTITY, PRODUCT LINE

    Thanks, Ben
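
    One way to pull the killed columns back without re-editing (a sketch; the sheet name, range and column number are assumptions): key both sheets on SKU and let VLOOKUP fetch each missing column from the original. With the original data on a sheet named Original (SKU in its first column) and SKU in column A of the edited sheet:

        =VLOOKUP($A2, Original!$A:$AD, 3, FALSE)

    Here 3 stands for whichever original column is being recovered; fill the formula down, repeat per missing column with the right column number, then copy and paste-special as values.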

  • Getting a user's Facebook profile URL

    - by Greg Pabst
    I am creating a registry site so similar people can find each other easily. I don't want to use Facebook Connect as the primary log-in method, or use Facebook to store their information; I'll be creating a database on my end to store that info. For security reasons I won't be displaying the user's address, phone number or email address, so I wanted to provide the next best way for people to connect with each other, and this is where Facebook comes in. Normally I would just ask them to type their Facebook URL in a text box, but I don't think most people know what their URL is, which is why I think I need to use Facebook Connect. So here is my idea: when the user signs up, there is a check box that, when checked, signifies they are allowing people to find them on Facebook. I assume once they click the register button that a Facebook Connect popup will show up asking for permission to access their Facebook account. When they "allow" it, then I can get their profile URL. All I need is their Facebook profile URL; I don't want any other Facebook features or information. Is Facebook Connect the best thing to use for this scenario? Is there an easier way? Several months ago on the Facebook Connect site there used to be examples of doing this, but all the documentation has been rearranged and changed, and I can't seem to find the information. Any help you can provide would be great!

  • Are parameters really enough to prevent SQL injections?

    - by Rune Grimstad
    I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise that they give immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server? There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but right now I am thinking of SQL injections. I'm especially interested in attacks against MSSQL 2005 and 2008, since they are my primary databases, but all databases are interesting. Edit: To clarify what I mean by parameters and parameterized queries: by using parameters I mean using "variables" instead of building the SQL query in a string. So instead of doing this:

        SELECT * FROM Table WHERE Name = 'a name'

    we do this:

        SELECT * FROM Table WHERE Name = @Name

    and then set the value of the @Name parameter on the query / command object.
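
    For completeness, the .NET pattern being described looks like this (a routine ADO.NET sketch; the table and parameter names follow the example above):

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT * FROM Table WHERE Name = @Name", conn))
        {
            // the value travels as data, never spliced into the SQL text
            cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 50).Value = name;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume rows */ }
            }
        }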

  • From ASPX to WCF

    - by Barguast
    I'm hoping someone can advise me on how to solve my networking scenario. Both the client and the server are to be C# / .NET based. I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results). At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility: I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I control the serialised format, I can also deserialise lists of objects as they are received, which is crucial. My problem isn't a problem as such, but this feels a little hack-ish, and I can't help but wonder if there are better ways to go about it. I'm considering moving on to WCF, or perhaps another technology, to see if it helps. However, I need to know if it helps with my scenarios above, that is: Can a WCF method return a list of objects, and can the client receive the items of this list as they arrive, as opposed to getting the entire list on completion (i.e. streaming)? Does anyone know of any examples of this? Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose. Are there any other approaches I should consider? Thanks for your time spent reading this. I hope you can help.
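
    One WCF feature that maps onto this (a sketch of the general shape, not a vetted design; the contract name is made up): an operation can return a Stream, and with TransferMode.Streamed on the binding the client reads the response as it arrives, so items serialised into the stream can be deserialised incrementally:

        [ServiceContract]
        public interface IQueryService
        {
            [OperationContract]
            Stream GetResults(string query);   // client consumes this as it arrives
        }

        // binding used on both service and client side
        var binding = new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,
            MaxReceivedMessageSize = long.MaxValue   // the size cap is an assumption to tune
        };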

  • SQL Server 2005, wide indexes, computed columns, and sargable queries

    - by luksan
    In my database, assume we have a table defined as follows:

        CREATE TABLE [Chemical](
            [ChemicalId] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
            [Name] nvarchar(max) NOT NULL,
            [Description] nvarchar(max) NULL
        )

    The value for Name can be very large, so we must use nvarchar(max). Unfortunately, we want to create an index on this column, but nvarchar(max) is not supported inside an index. So we create the following computed column and associated index based upon it:

        ALTER TABLE [Chemical] ADD [Name_Indexable] AS LEFT([Name], 20)

        CREATE INDEX [IX_Name] ON [Chemical]([Name_Indexable]) INCLUDE([Name])

    The index will not be unique, but we can enforce uniqueness via a trigger. If we perform the following query, the execution plan results in an index scan, which is not what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    However, if we modify the query to make it "sargable," then the execution plan results in an index seek, which is what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Indexable_Name]='[1,1''-Bicyclohexyl]-'
        AND [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    Is this a good solution if we control the format of all queries executed against the database via our middle tier? Is there a better way? Is this a major kludge? Should we be using full-text indexing?

  • Full Text Index type column is empty

    - by RemotecUk
    I am trying to create a full-text index on a VarBinary(max) field in my SQL Server 2008 database. The steps I am taking are as follows:

        1. Right-click on the table (dbo.Records) and select "Full Text Index".
        2. Then select "Define Index...".
        3. I choose the primary key, which is the PK of my table (field name Id, type UniqueIdentifier).
        4. I then get the screen with the options Available Columns, Language for Word Breaker and Type Column.
        5. I select my VarBinary(max) field called Chart as the Available Column by ticking the box.
        6. I select "English" as the Language for Word Breaker field.
        7. Then... I try to select the Type Column, but there are no entries in it, and I cannot proceed by clicking "Next" until this column is populated.

    Why are there no entries in this column for selection, and what should be in there? Note 1: the VarBinary(max) field is linked to a filegroup, if that makes any difference. Note 2: I also noticed that in the table designer I cannot set the full-text option on that same field to "Yes"; it's permanently stuck on "No". Thanks.
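
    The Type Column list only offers character columns from the same table, because full-text indexing of varbinary content needs a per-row document extension to choose a filter. A sketch of the usual fix (the column name and '.pdf' default are made up):

        -- one varchar column holding each row's document extension, e.g. '.pdf', '.doc'
        ALTER TABLE dbo.Records ADD ChartExtension varchar(8) NOT NULL
            CONSTRAINT DF_Records_ChartExtension DEFAULT '.pdf';

    After adding something like this, the new column should appear in the Type Column dropdown.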

  • Zend Sessions problem with IE8

    - by Emil
    I'm running a Zend Framework powered website and it seems to have serious problems with sessions. I have a 5-step process where I save the form data in the session between the steps and then save it into the database on the last step. When we built the site, sometimes the session just went away and forced us to restart. Now it seems to work again, but recently we discovered an issue with Internet Explorer 8: it fails between steps 2 and 3 and forgets the session. It works fine in IE6, IE7, FF, Chrome, Safari and even in my mobile web browser (SE P1). We're storing our sessions in the database, and if I deactivate the session DB handler it works. What's the difference between using the database and not using it for sessions? Do I lose something if I switch back? Bootstrap:

        /* Start session */
        $saveHandler = new Zend_Session_SaveHandler_DbTable(array(
            'name'           => 'sessions',
            'primary'        => 'id',
            'modifiedColumn' => 'modified',
            'dataColumn'     => 'data',
            'lifetimeColumn' => 'lifetime'
        ));
        Zend_Session::rememberMe((int) $config->session->lifetime);
        $saveHandler->setLifetime((int) $config->session->lifetime)
                    ->setOverrideLifetime(true);
        Zend_Session::setSaveHandler($saveHandler);
        Zend_Session::start();

    And in my step controller:

        $session = new Zend_Session_Namespace('wizard');

    Then I'm just working with $session, saving data in a stdClass in $session.

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do: I have a list of tests, with the following attributes:

        - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
        - depends: an associative array of tests/values that this test depends on;
        - join: a list of DB joins to be added when selecting items to process in this test;
        - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

        - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
        - the list of items is sent to the URI (using POST or stdin);
        - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
        - the results are stored in the DB;
        - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

        - backup: hosted on the backup machine(s), called through HTTP, checks whether each machine's backup went well;
        - DNS: hosted on the local machine, called via stdin, checks whether each machine's FQDN has a valid DNS entry.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey-patch pymongo.connection._Pool, I get this error on connection:

        AutoReconnect: could not find master/primary

    But when I change the _Pool class in the pymongo.connection module itself, it works fine. Even if I copy the _Pool implementation from the pymongo.connection module and try to monkey-patch with that same code, it still gives the same exception. I need to remove threading.local from the _Pool class, because I use gevent and I need to implement one pool for all mongo connections (for all threads). I use this code:

        import os

        import pymongo

        class GPool:
            """A simple connection pool.

            Uses thread-local socket per thread. By calling return_socket() a
            thread can return a socket to the pool. Right now the pool size is
            capped at 10 sockets - we can expose this as a parameter later, if
            needed.
            """

            # Non thread-locals
            __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]

            def __init__(self, socket_factory):
                self.pool_size = 10
                if not hasattr(self, "sock"):
                    self.sock = None
                self.socket_factory = socket_factory
                if not hasattr(self, "sockets"):
                    self.sockets = []

            def socket(self):
                # we store the pid here to avoid issues with fork /
                # multiprocessing - see
                # test.test_connection:TestConnection.test_fork for an example
                # of what could go wrong otherwise
                pid = os.getpid()
                if self.sock is not None and self.sock[0] == pid:
                    return self.sock[1]
                try:
                    self.sock = (pid, self.sockets.pop())
                except IndexError:
                    self.sock = (pid, self.socket_factory())
                return self.sock[1]

            def return_socket(self):
                if self.sock is not None and self.sock[0] == os.getpid():
                    # There's a race condition here, but we deliberately
                    # ignore it. It means that if the pool_size is 10 we
                    # might actually keep slightly more than that.
                    if len(self.sockets) < self.pool_size:
                        self.sockets.append(self.sock[1])
                    else:
                        self.sock[1].close()
                    self.sock = None

        pymongo.connection._Pool = GPool

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks, I have a SQL Server 2008 database with a table for Tags. A tag is just an id and a name. The definition of the tags table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name is also a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (select ID from Tag where Name=@Name)
            if @ID is null
            begin
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            end
            else
            begin
                return @ID
            end

    My issue is that, other than my being a novice at concurrent database design, there seems to be a race condition that occasionally causes an error saying I'm trying to enter duplicate keys (Name) into the DB. The error is:

        Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'.

    This makes sense; I'm just not sure how to fix it. If it were code, I would know how to lock the right areas. SQL Server is quite a different beast. My first question is: what is the proper way to code this 'check, then update' pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me the best way to do that. Any help in the right direction will be greatly appreciated. Thanks in advance.
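
    The exclusive-lock version of that check-then-insert usually looks like this (a sketch: UPDLOCK takes an update lock on the key being checked, and HOLDLOCK keeps it to the end of the transaction, so two callers cannot both see "not found" and both insert):

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @ID int;
            BEGIN TRAN;
                SELECT @ID = ID FROM Tag WITH (UPDLOCK, HOLDLOCK) WHERE Name = @Name;
                IF @ID IS NULL
                BEGIN
                    INSERT Tag(Name) VALUES (@Name);
                    SET @ID = SCOPE_IDENTITY();
                END
            COMMIT TRAN;
            RETURN @ID;
        END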

  • Post method in Servlet is not being called again after being executed once

    - by SaurabhCsIITKgp
    I am implementing a database-based web application using servlets. When I input a parameter using a form in the JSP page, it redirects to a servlet which subsequently adds the value to the database (the servlet creates a new table if the table doesn't already exist). The creation of the table and the addition of the value work fine if the table doesn't already exist. Once it has been created, however, and the parameter is inputted again in the form, the submit button no longer redirects to the servlet, nor is the value added to the database. Kindly advise me as to where I am going wrong. Following are the snippets of my code. From the JSP page (/showmanager is the url-pattern of the servlet):

        <form action="showmanager" method="post">
            <h3>Enter name of the show: </h3>
            <input type="text" name="showname" value="">
            <input type="hidden" name="task" value="addshow" />
            <input type="button" value="Add Show">
        </form>

    From the servlet (POST method):

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            if (request.getParameter("task").equals("addshow")) {
                this.addShow(request.getParameter("showname"));
                response.sendRedirect("showmanager.jsp");
            }
        }

    Method to add in database:

        protected boolean addShow(String showname) {
            try {
                statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                if (statement.executeUpdate() > 0) {
                    return true;
                }
            } catch (Exception e) {
                try {
                    statement = con.prepareStatement("create table showdb10 (id int NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, date varchar(20), time varchar(20), b_total int, o_total int, b_avbl int, o_avbl int, b_price double(10,2), o_price double(10,2), seat_no varchar(20), transaction_id varchar(255), total_sales double(10,2), paymnt_artists double(10,2), paymnt_othr double(10,2), flag varchar(20), PRIMARY KEY(id))");
                    statement.executeUpdate();
                    statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                    if (statement.executeUpdate() > 0) {
                        return true;
                    }
                } catch (Exception e2) {}
            }
            return false;
        }
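
    Two observations worth checking (neither is from the post, and the first is only a hypothesis about this symptom): an <input type="button"> never submits a form on its own; only type="submit" does, so the earlier successful run may have gone through via the Enter key in the text field rather than the button:

        <input type="submit" value="Add Show">

    Separately, since the code already uses PreparedStatement, binding the value instead of concatenating it closes a SQL injection hole:

        statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES (?)");
        statement.setString(1, showname);
        statement.executeUpdate();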

  • Refining data stored in SQLite - how to join several contacts?

    - by Krab
    Problem background

    Imagine this problem. You have a water molecule which is in contact with other molecules (if the contact is a hydrogen bond, there can be 4 other molecules around my water), like in the following picture (A, B, C, D are some other atoms, and the dots mean the contacts):

        A   B
         . .
          O
         / \
        H   H
        .   .
        C   D

    I have the information about all the dots, and I need to eliminate the water in the center and create records describing the contacts A-C, A-D, A-B, B-C, B-D, and C-D.

    Database structure

    Currently, I have the following structure in the database. Table atoms:

        "id" integer PRIMARY KEY,
        "amino" char(3) NOT NULL,   (HOH for water, or another value)
        other columns identifying the atom

    Table contacts:

        "acceptor_id" integer NOT NULL,   (the atom near to my hydrogen, here C or D)
        "donor_id" integer NOT NULL,      (here A or B)
        "directness" char(1) NOT NULL,    (this should be D for direct and W for water-mediated)
        other columns about the contact, such as the distance

    Current solution (insufficient)

    Now, I'm going through all the contacts which have donor.amino = "HOH". In this sample case, this would select the contacts to C and D. For each of these selected contacts, I look up contacts having the same acceptor_id as the donor_id in the currently selected contact. From this information, I create the new contact. At the end, I delete all contacts to or from HOH. This way, I am obviously unable to create the C-D and A-B contacts (the other 4 are OK). If I try a similar approach of finding two contacts having the same donor_id, I end up with duplicate contacts (C-D and D-C). Is there a simple way to retrieve all six contacts without duplicates? I'm dreaming about some one-page-long SQL query which retrieves just these six wanted rows. :-) It is preferable to conserve the information about who is the donor where possible, but that is not strictly necessary.
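
    One shape such a query can take (a sketch; :w stands for the id of the water oxygen being eliminated): collect every partner of the water regardless of which side of the contact it sits on, then pair the partners with an inequality so each unordered pair comes out exactly once:

        SELECT p1.atom_id AS first_atom, p2.atom_id AS second_atom
        FROM (SELECT donor_id AS atom_id FROM contacts WHERE acceptor_id = :w
              UNION
              SELECT acceptor_id FROM contacts WHERE donor_id = :w) AS p1
        JOIN (SELECT donor_id AS atom_id FROM contacts WHERE acceptor_id = :w
              UNION
              SELECT acceptor_id FROM contacts WHERE donor_id = :w) AS p2
          ON p1.atom_id < p2.atom_id;

    For four partners this yields the six pairs; the donor/acceptor distinction is lost by the inequality trick, which matches the "preferable but not strictly necessary" note above.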

  • MassPay and MySQL

    - by Mike
    Hi, I am testing PayPal's MassPay using their 'MassPay NVP example', and I am having difficulty amending the code so it takes its input from my MySQL database. Basically I have a users table in MySQL which contains email address, status of payment (paid, unpaid) and balance:

        CREATE TABLE `users` (
          `user_id` int(10) unsigned NOT NULL auto_increment,
          `email` varchar(100) collate latin1_general_ci NOT NULL,
          `status` enum('unpaid','paid') collate latin1_general_ci NOT NULL default 'unpaid',
          `balance` int(10) NOT NULL default '0',
          PRIMARY KEY (`user_id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=6 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci

    Data:

        1  [email protected]  paid    100
        2  [email protected]  unpaid  11
        3  [email protected]  unpaid  20
        4  [email protected]  unpaid  1
        5  [email protected]  unpaid  20
        6  [email protected]  unpaid  15

    I then created a query which selects users with an unpaid balance of $10 or more:

        $conn = db_connect();
        $query = $conn->query("SELECT * from users WHERE balance >= '10' AND status = ('unpaid')");

    What I would like is for each record returned from the query to populate the code below. The code which I believe I need to amend is as follows:

        for ($i = 0; $i < 3; $i++) {
            $receiverData = array(
                'receiverEmail' => "[email protected]",
                'amount' => "example_amount",
            );
            $receiversArray[$i] = $receiverData;
        }

    However, I just can't get it to work. I have tried using mysqli_fetch_array and then replacing "[email protected]" with $row['email'] and "example_amount" with $row['balance'] in various ways, but it doesn't work. I also need it to loop for however many rows the query retrieved, instead of the < 3 in the for loop above. So the end result I am looking for is for the $nvpStr string to pass something like this:

        $nvpStr = "&EMAILSUBJECT=test&RECEIVERTYPE=EmailAddress&CURRENCYCODE=USD&[email protected]&L_Amt=11&[email protected]&L_Amt=11&[email protected]&L_Amt=20&[email protected]&L_Amt=20&[email protected]&L_Amt=15";

    Thanks
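
    A sketch of the loop as I'd expect it to look (untested; assumes $query is the mysqli result from above and that the rest of the NVP example consumes $receiversArray unchanged):

        $receiversArray = array();
        while ($row = $query->fetch_assoc()) {
            $receiversArray[] = array(
                'receiverEmail' => $row['email'],
                'amount'        => $row['balance'],
            );
        }

    The loop then follows the result set automatically, so the hard-coded < 3 disappears.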

  • git workflow incorporating many, but not all, commits from many forks

    - by becomingGuru
    I have a git repo. It has been forked several times and many independent commits have been made on top of it. Everything is normal, like what happens in many github-hosted projects. Now, what exact workflow should I follow if I want to see all those commits individually and apply the ones I like? The workflow I followed, which is not optimal, was to create a branch named after each github username, merge the changes into my master, and manually undo any changes from the commits I don't need (there were not many, so it worked). What I want is the ability to see all commits from different forks individually, and to cherry-pick and apply them on top of my master. What is the workflow to follow for that? And what GUI (gitk?) enables me to see all the different individual commits? I realize that merge should be a primary part of the workflow and not cherry-pick, as cherry-picking creates a different commit (from git's point of view). Even rebasing others' changes on top of mine might not preserve the history on the graph to indicate that it is their commits I have rebased. So then, how do I ignore just a few commits out of a lot of them? I think github should have an "apply this commit on top of my master" button in their graph after each commit node, so I could just pull it, after doing all that.
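
    The command-line side of that workflow might look like this (a sketch; the remote name and URL are made up):

        git remote add alice git://github.com/alice/project.git
        git fetch alice

        # commits on alice's master that are not yet on mine
        git log --oneline master..alice/master

        # browse every branch and fork graphically
        gitk --all

        # apply only the commits I want onto my master
        git checkout master
        git cherry-pick <sha1> <sha2>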
