Search Results

Search found 17771 results on 711 pages for 'mongodb query'.

Page 88/711 | < Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >

  • Cannot Install mysql-query-browser in Ubuntu 12.04

    - by doy
    I am trying to install mysql-query-browser on my Ubuntu 12.04 GUI, and I get the error message: Updating mysql-administrator installation paths... ./mysql-administrator: 59: ./mysql-administrator: cannot create /opt/mysql-gui-tools-5.0/lib/pangorc.bak: Permission denied Error updating files for new installation path. Please make sure mysql-administrator is installed correctly and you have proper write permissions in the installation directory. I tried /opt/mysql-gui-tools-5.0$ ./mysql-administrator --update-paths . and /opt/mysql-gui-tools-5.0$ ./mysql-administrator --update-paths but still no success.
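
    The "Permission denied" error on /opt/mysql-gui-tools-5.0 suggests the path-update script simply lacks write access to the installation directory. A minimal thing to try (an assumption based on the error text, not a confirmed fix) is to re-run the same command with elevated privileges:

        cd /opt/mysql-gui-tools-5.0
        # same command the installer runs, but with write access to /opt
        sudo ./mysql-administrator --update-paths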

    Read the article

  • Search Engine Query Word Order

    - by EoghanM
    I have pages with titles like 'Alpha with Beta'. For every such page, there is an inverse page, 'Beta with Alpha'. Both pages link to each other. When someone on Google searches for 'Beta with Alpha', I'd like them to land on the correct page, but sometimes 'Alpha with Beta' ranks higher (or vice versa). I was thinking of inspecting the referral link when a visitor arrives on my site and silently redirecting them to the correct page based on what they actually searched for. I'm just wondering if this could be penalized by Google as 'cloaking/sneaky redirects'? Or is there a better way to ensure that the correct page on my site ranks higher for the matching query?

    Read the article

  • SQL SERVER Get Latest SQL Query for Sessions DMV

    In a recent SQL training session I was asked how one can figure out the last SQL statement executed in each session. The query for this is very simple. It uses two DMVs, and I created the following quick script for the same: SELECT session_id, TEXT FROM sys.dm_exec_connections CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle) AS ST While working with DMVs if you [...]
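
    The same CROSS APPLY pattern extends to other DMVs when more per-session context is needed. A sketch (verify column availability against your SQL Server version) that adds the login and state of each session:

        SELECT s.session_id, s.login_name, s.status, t.text AS last_sql
        FROM sys.dm_exec_connections AS c
        JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
        CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t;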

    Read the article

  • Using UNION in a query with PHP [migrated]

    - by work
    I have an SQL query which links 3 tables using UNION: $sql ="(SELECT Drive.DriveID,Ram.Memory from Drive,Ram where Drive.DriveID = Ram.RamID) UNION (SELECT Drive.DriveID,External.Memory from Drive, External where Drive.DriveID = External.ExtID)"; Suppose I want to get Ram.Name as well. How do I do this? If I use Ram.Name in the first SELECT statement it would not produce the correct result. Any method for tackling this? I want to do it using UNION.
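
    A UNION needs the same number of columns in every branch, so the usual trick is to pad the branch that has no matching column with NULL. A sketch using the table and column names from the question:

        (SELECT Drive.DriveID, Ram.Memory, Ram.Name
           FROM Drive, Ram
          WHERE Drive.DriveID = Ram.RamID)
        UNION
        (SELECT Drive.DriveID, External.Memory, NULL AS Name
           FROM Drive, External
          WHERE Drive.DriveID = External.ExtID)

    Rows coming from the External branch then carry NULL in the Name column, which keeps the two SELECT lists aligned.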

    Read the article

  • Learning MySQL Query optimization

    - by recluze
    I've been doing web/desktop/server development for a while and have worked with many databases (MySQL mostly). I've come to the point in my career where I need to have someone look at my queries because they're 'kind of slow'. I believe it's now time to start learning query optimization. While I know the basics of indexes and joins, I'm not familiar with how to use, say, the EXPLAIN output to improve the performance of my queries. I have not been able to find any online material that starts with the basics and takes me through to real application. Getting a book is not an option right now, so I'm looking for tips on how to proceed with this. I hope this question is general enough not to get closed.
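
    A concrete starting point is to prefix a slow query with EXPLAIN and read the type, key and rows columns of the plan. A sketch (the table and index names here are invented purely for illustration):

        EXPLAIN SELECT o.id, o.total
        FROM orders AS o
        WHERE o.customer_id = 42
        ORDER BY o.created_at DESC;

        -- if the plan shows type: ALL (full table scan) and key: NULL, an index
        -- covering the filter and sort columns is the usual first fix
        ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);

    The goal is to move the access type from ALL toward ref or range and to bring the rows estimate close to what the query actually needs.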

    Read the article

  • How to ignore query parameters in web cache?

    - by eduardocereto
    Google Analytics uses some query parameters to identify campaigns and to do cookie control, all handled by JavaScript code. Take a look at the following example: http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_term=October%2B2008&utm_campaign=promotion This will set cookies via JavaScript with the right campaign origin. These query parameters can have multiple and sometimes random values. Since they are used as cache hash keys, cache performance is heavily degraded in some scenarios. I suppose there's a reasonably simple configuration on cache servers to ignore all query parameters, or only specific ones. Am I right? Does anyone know how hard this is to set up in popular web cache solutions? I'm not interested in one specific web cache solution; it would be great to hear about the one you use.
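
    As one concrete illustration (nginx acting as a caching reverse proxy; "backend" is a placeholder upstream, and Varnish or Squid have equivalent URL-rewriting hooks): the cache key is just a string template, so building it from $uri instead of the default $request_uri drops the query string from the key entirely.

        # in the http {} context
        proxy_cache_path /var/cache/nginx keys_zone=site_cache:10m;

        server {
            listen 80;
            location / {
                proxy_pass       http://backend;
                proxy_cache      site_cache;
                # default key is $scheme$proxy_host$request_uri (query string included);
                # $uri omits the query string, so all utm_* variants share one entry
                proxy_cache_key  "$scheme$proxy_host$uri";
            }
        }

    Ignoring only specific parameters (for example just the utm_* ones) requires rewriting the URL or the key before it is hashed, and that syntax differs from cache to cache.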

    Read the article

  • Forward slash in Kibana 3 query

    - by G Mawr
    I'm trying to add a query that will match a request that ends with a slash, like this one: n.n.n.n - - [16/Oct/2013:16:40:41 +0100] "GET / HTTP/1.1" 200 25058 "-" "Mozilla/5.0 (iPad; CPU OS 7_0_2 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A501 Safari/9537.53" I'm using the Lucene query type. If my query is set to *, I see the event. If I set it to request:"css", I see CSS requests, as expected. However, all of the following yield no results: request:"/" request:"\/" request:"\\/" I tried a Lucene regular expression, with no luck: request:/\// I note that someone else is getting what appears to be a similar issue, although that's on Kibana 2: https://github.com/rashidkpc/Kibana/issues/401 How can I query for requests that end with a / character?

    Read the article

  • SQL Query to update parent record with child record values

    - by Wells
    I need to create a Trigger that fires when a child record (Codes) is added, updated or deleted. The Trigger stuffs a string of comma-separated Code values from all child records (Codes) into a single field in the parent record (Projects) of the added, updated or deleted child record. I am stuck on writing a correct query to retrieve the Code values from just those child records that are the children of a single parent record.

        -- Create the test tables
        CREATE TABLE projects (
            ProjectId varchar(16) PRIMARY KEY,
            ProjectName varchar(100),
            Codestring nvarchar(100)
        )
        GO
        CREATE TABLE prcodes (
            CodeId varchar(16) PRIMARY KEY,
            Code varchar(4),
            ProjectId varchar(16)
        )
        GO
        -- Add sample data: two projects records, one with 3 child records, the other with 2.
        INSERT INTO projects (ProjectId, ProjectName)
        SELECT '101','Smith' UNION ALL
        SELECT '102','Jones'
        GO
        INSERT INTO prcodes (CodeId, Code, ProjectId)
        SELECT 'A1','Blue', '101' UNION ALL
        SELECT 'A2','Pink', '101' UNION ALL
        SELECT 'A3','Gray', '101' UNION ALL
        SELECT 'A4','Blue', '102' UNION ALL
        SELECT 'A5','Gray', '102'
        GO

    I am stuck on how to create a correct UPDATE query. Can you help fix this query?

        -- Partially working, but stuffs all values, not just values from child (prcodes) records of the parent (projects)
        UPDATE proj
        SET proj.Codestring = (SELECT STUFF((SELECT ',' + prc.Code
                                             FROM projects proj
                                             INNER JOIN prcodes prc ON proj.ProjectId = prc.ProjectId
                                             ORDER BY 1 ASC
                                             FOR XML PATH('')), 1, 1, ''))

    The result I get for the Codestring field in Projects is:

        ProjectId  ProjectName  Codestring
        101        Smith        Blue,Blue,Gray,Gray,Pink
        ...

    But the result I need for the Codestring field in Projects is:

        ProjectId  ProjectName  Codestring
        101        Smith        Blue,Pink,Gray
        ...

    Here is my start on the Trigger. The UPDATE query above will be added to this Trigger. Can you help me complete the Trigger creation query?

        CREATE TRIGGER Update_Codestring ON prcodes
        AFTER INSERT, UPDATE, DELETE
        AS
        WITH CTE AS (
            select ProjectId from inserted
            union
            select ProjectId from deleted
        )
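
    For reference, a sketch of a correlated version of that UPDATE (table and column names from the question): the inner FOR XML query has to filter prcodes by the ProjectId of the row being updated instead of re-joining every project.

        UPDATE proj
        SET proj.Codestring = STUFF((SELECT ',' + prc.Code
                                     FROM prcodes prc
                                     WHERE prc.ProjectId = proj.ProjectId
                                     ORDER BY prc.CodeId
                                     FOR XML PATH('')), 1, 1, '')
        FROM projects proj;

    Inside the AFTER trigger the same statement can be limited to the affected parents by appending WHERE proj.ProjectId IN (SELECT ProjectId FROM inserted UNION SELECT ProjectId FROM deleted), which reuses the CTE idea shown above.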

    Read the article

  • Can the mysql slow query log show milliseconds?

    - by Chase Seibert
    The mysql slow query log shows query time in whole integers. # Query_time: 0 Lock_time: 0 Rows_sent: 177 Rows_examined: 177 SELECT ... # Query_time: 1 Lock_time: 0 Rows_sent: 56 Rows_examined: 208 SELECT ... There was a microsecond patch to allow mysql to be configured to log queries that take longer than X microseconds to run. But is there a way to have the log output the query time in either milliseconds or microseconds?
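
    For what it's worth, assuming a reasonably recent server: from MySQL 5.1.21 on (and earlier with Percona's "microslow" patch), long_query_time accepts fractional seconds and the file-based slow log writes Query_time with microsecond precision, so no extra patching is needed:

        SET GLOBAL slow_query_log = 1;
        SET GLOBAL long_query_time = 0.25;  -- log anything slower than 250 ms

        -- entries in the log file then look like:
        -- # Query_time: 0.362947  Lock_time: 0.000121 Rows_sent: 56  Rows_examined: 208

    The TABLE log destination keeps coarser granularity in some versions, so read the file output when sub-second timings matter.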

    Read the article

  • Hibernate question: generic select query without using property names

    - by Sudhir Gudhe
    Hi, my question is: I tried to get the data from the database using String SQL_QUERY = "from dat_personal_info "; Query query = session.createQuery(SQL_QUERY); for (it = query.iterate(); it.hasNext();) { Object[] rowObject = (Object[]) it.next(); } but an error occurs: Hibernate: select dat_person0_.row_id as col_0_0_ from dat_personal_info dat_person0_ java.lang.ClassCastException: bn.com.server.database.maptables.dat_personal_info$$EnhancerByCGLIB$$e1ffd36e cannot be cast to [Ljava.lang.Object; Can anyone help? Thanks & Regards, Sudhir Gudhe
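
    The ClassCastException suggests the HQL "from dat_personal_info" returns whole entity instances rather than Object[] rows; Hibernate only hands back Object[] when the select list contains more than one item. A sketch of the loop with the cast adjusted accordingly (class and variable names as in the question):

        Query query = session.createQuery("from dat_personal_info");
        for (Iterator it = query.iterate(); it.hasNext();) {
            // each element is the mapped entity itself, not an Object[] row
            dat_personal_info row = (dat_personal_info) it.next();
        }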

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1);
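
    For what it's worth, PostgreSQL has a non-standard DISTINCT ON clause that expresses "latest row per usr_id" directly; a sketch against the lives table defined above:

        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
        FROM lives
        ORDER BY usr_id, time_stamp DESC, trans_id DESC;

    DISTINCT ON keeps the first row of each usr_id group after sorting, so ordering by time_stamp and then trans_id descending breaks the same-timestamp ties described in the question; an index on (usr_id, time_stamp DESC, trans_id DESC) helps it scale.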

    Read the article

  • Oracle - Return shortest string value in a set of rows

    - by Sridhar
    Hi, I'm trying to write a query that returns the shortest string value in the column. For ex: if ColumnA has values ABCDE, ZXDR, ERC, the query should return "ERC". I've written the following query, but I'm wondering if there is any better way to do this? BTW, the query should return a single value. select distinct ColumnA from ( select ColumnA, rank() over (order by length(ColumnA), ColumnA) len_rank from TableA where ColumnB = 'XXX' ) where len_rank <= 1 Thank you.
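
    The rank() approach works; a slightly simpler alternative is to sort in an inline view and take the first row. A sketch with the same table and column names (on Oracle 12c and later, FETCH FIRST 1 ROW ONLY can replace the ROWNUM filter):

        SELECT ColumnA
        FROM (
            SELECT ColumnA
            FROM TableA
            WHERE ColumnB = 'XXX'
            ORDER BY LENGTH(ColumnA), ColumnA
        )
        WHERE ROWNUM = 1;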

    Read the article

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but never updated otherwise or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", then manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) with easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
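
    Since MongoDB is on the candidate list, a sketch of how the current MySQL layout could map onto it (collection and field names follow the question's own example, using the mongo shell syntax of that era):

        // one small document per incremental save, keyed by document id and save number
        db.saves.ensureIndex({document_id: 1, save_id: 1})
        db.saves.insert({document_id: 4324319, save_id: 53, data: "<appended chunk>"})

        // "document #4324319, save #53 to the end", in increasing save order
        db.saves.find({document_id: 4324319, save_id: {$gte: 53}}).sort({save_id: 1})

    Replica sets would cover the "each document on at least 3 nodes" requirement and sharding on document_id gives horizontal scale; keeping one record per save also avoids ever rewriting a multi-megabyte document on append.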

    Read the article

  • Lucene's nested query evaluation regarding negation

    - by ponzao
    Hi, I am adding Apache Lucene support to Querydsl (which offers type-safe queries for Java) and I am having problems understanding how Lucene evaluates queries especially regarding negation in nested queries. For instance the following two queries in my opinion are semantically the same, but only the first one returns results. +year:1990 -title:"Jurassic Park" +year:1990 +(-title:"Jurassic Park") The simplified object tree in the second example is shown below. query : Query clauses : ArrayList [0] : BooleanClause "MUST" occur : BooleanClause.Occur "year:1990" query : TermQuery [1] : BooleanClause "MUST" occur : BooleanClause.Occur query : BooleanQuery clauses : ArrayList [0] : BooleanClause "MUST_NOT" occur : BooleanClause.Occur "title:"Jurassic Park"" query : TermQuery Lucene's own QueryParser seems to evaluate "AND (NOT" into the same kind of object trees. Is this a bug in Lucene or have I misunderstood Lucene's query evaluation? I am happy to give more information if necessary.
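
    This matches Lucene's documented behaviour rather than a bug: a BooleanQuery consisting only of MUST_NOT clauses matches nothing, because prohibited clauses can only subtract from documents selected by some other clause. A sketch of the usual workaround for the nested form, pairing the negation with MatchAllDocsQuery (classic BooleanQuery API; the lowercased terms assume a standard analyzer):

        PhraseQuery title = new PhraseQuery();
        title.add(new Term("title", "jurassic"));
        title.add(new Term("title", "park"));

        BooleanQuery inner = new BooleanQuery();
        inner.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);  // gives MUST_NOT something to subtract from
        inner.add(title, BooleanClause.Occur.MUST_NOT);

        BooleanQuery outer = new BooleanQuery();
        outer.add(new TermQuery(new Term("year", "1990")), BooleanClause.Occur.MUST);
        outer.add(inner, BooleanClause.Occur.MUST);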

    Read the article

  • GWT query fails the second time only

    - by Koran
    HI, I have a visualization function in GWT which calls for two instances of the same panels - two queries. Now, suppose one url is A and the other url is B. Here, I am facing an issue in that if A is called first, then both A and B works. If B is called first, then only B works, A - times out. If I call both times A, only the first time A works, second time it times out. If I call B twice, it works both times without a hitch. Even though the error comes at timed out, it actually is not timing out - in FF status bar, it shows till - transferring data from A, and then it gets stuck. This doesnt even show up in the first time query. The only difference between A and B is that B returns very fast, while A returns comparitively slow. The sample code is given below: public Panel(){ Runnable onLoadCallback = new Runnable() { public void run() { Query query = Query.create(dataUrl); query.setTimeout(60); query.send(new Callback() { public void onResponse(QueryResponse response) { if (response.isError()){ Window.alert(response.getMessage()); } } } } VisualizationUtils.loadVisualizationApi(onLoadCallback, PieChart.PACKAGE); } What could be the reason for this? I cannot think of any reason why this should happen? Why is this happening only for A and not for B? EDIT: More research. The query which works all the time (i.e. B is the example URL given in GWT visualization site: see comment [1]). So, I tried in my app engine to reproduce it - the following way s = "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});"; response = HttpResponse(s, content_type="text/plain; charset=utf-8") response['Expires'] = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime()) return response Where s is the data when we run the query for B. I tried to add Expires etc too, since that seems to be the only header which has the difference, but now, the query fails all the time. For more info - I am now sending the difference between my server response vs the working server response. They seems to be pretty similar. HTTP/1.0 200 OK Content-Type: text/plain Date: Wed, 16 Jun 2010 11:07:12 GMT Server: Google Frontend Cache-Control: private, x-gzip-ok="" google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});Connection closed by foreign host. Mac$ telnet spreadsheets.google.com 80 Trying 209.85.231.100... 
Connected to spreadsheets.l.google.com. Escape character is '^]'. GET http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1 HTTP/1.0 200 OK Content-Type: text/plain; charset=UTF-8 Date: Wed, 16 Jun 2010 11:07:58 GMT Expires: Wed, 16 Jun 2010 11:07:58 GMT Cache-Control: private, max-age=0 X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block Server: GSE google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});Connection closed by foreign host. Also, please note that App engine did not allow the Expires header to go through - can that be the reason? But if that is the reason, then it should not fail if B is sent first and then A. Comment [1] : http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1

    Read the article

  • What scalability problems have you solved using a NoSQL data store?

    - by knorv
    NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open source NoSQL data stores include: Cassandra (tabular, written in Java, used by Facebook, Twitter, Digg, Rackspace, Mahalo and Reddit) CouchDB (document, written in Erlang, used by Engine Yard and BBC) Dynomite (key-value, written in C++, used by Powerset) HBase (key-value, written in Java, used by Bing) Hypertable (tabular, written in C++, used by Baidu) Kai (key-value, written in Erlang) MemcacheDB (key-value, written in C, used by Reddit) MongoDB (document, written in C++, used by Sourceforge, Github, Electronic Arts and NY Times) Neo4j (graph, written in Java, used by Swedish Universities) Project Voldemort (key-value, written in Java, used by LinkedIn) Redis (key-value, written in C, used by Engine Yard, Github and Craigslist) Riak (key-value, written in Erlang, used by Comcast and Mochi Media) Ringo (key-value, written in Erlang, used by Nokia) Scalaris (key-value, written in Erlang, used by OnScale) ThruDB (document, written in C++, used by JunkDepot.com) Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (Japanese social networking site)) I'd like to know about specific problems you - the SO reader - have solved using data stores and what NoSQL data store you used. Questions: What scalability problems have you used NoSQL data stores to solve? What NoSQL data store did you use? What database did you use before switching to a NoSQL data store? I'm looking for first-hand experiences, so please do not answer unless you have that.

    Read the article

  • Where clause in joins vs Where clause in Sub Query

    - by Kanavi
    DDL create table t ( id int Identity(1,1), nam varchar(100) ) create table t1 ( id int Identity(1,1), nam varchar(100) ) DML Insert into t( nam)values( 'a') Insert into t( nam)values( 'b') Insert into t( nam)values( 'c') Insert into t( nam)values( 'd') Insert into t( nam)values( 'e') Insert into t( nam)values( 'f') Insert into t1( nam)values( 'aa') Insert into t1( nam)values( 'bb') Insert into t1( nam)values( 'cc') Insert into t1( nam)values( 'dd') Insert into t1( nam)values( 'ee') Insert into t1( nam)values( 'ff') Query - 1 Select t.*, t1.* From t t Inner join t1 t1 on t.id = t1.id Where t.id = 1 Query 1 SQL profiler Result Reads = 56, Duration = 4 Query - 2 Select T1.*, K.* from ( Select id, nam from t Where id = 1 )K Inner Join t1 T1 on T1.id = K.id Query 2 SQL Profiler Results Reads = 262 and Duration = 2 You can also see my SQlFiddle Query - Which query should be used and why?

    Read the article

  • python mongokit Connection() AssertionError

    - by zalew
    just installed mongokit and can't figure out why I get AssertionError python console: >>> from mongokit import Connection >>> c = Connection() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/mongokit-0.5.3-py2.6.egg/mongokit/connection.py", line 35, in __init__ super(Connection, self).__init__(*args, **kwargs) File "build/bdist.linux-i686/egg/pymongo/connection.py", line 169, in __init__ File "build/bdist.linux-i686/egg/pymongo/connection.py", line 338, in __find_master File "build/bdist.linux-i686/egg/pymongo/connection.py", line 226, in __master File "build/bdist.linux-i686/egg/pymongo/database.py", line 220, in command File "build/bdist.linux-i686/egg/pymongo/collection.py", line 356, in find_one File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 485, in next File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 461, in _refresh File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 429, in __send_message File "build/bdist.linux-i686/egg/pymongo/helpers.py", line 98, in _unpack_response AssertionError >>> mongodb console: Wed Mar 31 10:27:34 connection accepted from 127.0.0.1:60480 #30 Wed Mar 31 10:27:34 end connection 127.0.0.1:60480 db 1.5 pymongo 1.5 (tested also on 1.4.) mongokit 0.5.3 (also 0.5.2)
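
    One plausible cause (an assumption, the traceback alone does not prove it): the assertion fires inside pymongo's _unpack_response, which with drivers of that generation usually points at a version mismatch between mongokit, pymongo and the server rather than at application code, so upgrading the pair is the usual first step:

        pip install --upgrade pymongo mongokit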

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably mongo db in an ASP.Net MVC web shop. A little reasoning at the beginning: There are about 2 million products. The product model would be pretty bad for rdbms as there'd be many different kinds of products with unique attributes. For example, there'd be books which have isbn, authors, title, pages etc as well as dvds with play time, directors, artists etc and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100 whereas each individual product has 15 columns at most. The three commonly used ways in RDBMS would be: EAV model which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.). Ignore the column count and put it all in the product table: Although I deal with somewhat bigger databases (row wise), I don't have any experience with tables with more than 20 columns as far as performance is concered but I guess 100 columns would have some implications. Create a table for each product type: I personally don't like this approach as it complicates everything else. C# Driver / Classes: I'd like to use the NoRM driver and so far I think i'll try to create a product dto that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour which are wrappers around a product dto but only expose the revelent Properties. My questions now: Are my performance concerns with the many columns approach valid? Did I overlook something and there is a much better way to do it in an RDBMS? Is MongoDb on Windows stable enough? Does my approach with different behaviour wrappers make sense?

    Read the article

  • Linq causes collection to disappear when trying to use OrderByDescending

    - by Jeremy B.
    For background, I am using MongoDB and Rob Conery's linq driver. The code I am attempting is thus: using (var session = new Session<ContentItem>()) { var contentCollection = session.QueryCollection.Where(x => x.CreatedOn < DateTime.Now).OrderByDescending(y => y.CreatedOn).ToList(); ViewData.Model = contentCollection; } this will work on one machine, but on another machine I get back no results. To get results i have to do using (var session = new Session<ContentItem>()) { var contentCollection = session.QueryCollection.Where(x => x.CreatedOn < DateTime.Now).ToList(); ViewData.Model = contentCollection.OrderByDescending(y => y.CreatedOn).ToList(); } I have to do ToList() on both lines, or no results. If I try to chain anything it breaks. This is the same project, all dll's are locally loaded. Both machines have the same framework, versions of Visual studio and addons. the only difference is one has VisualSVN the other AnkhSVN. I can't see those causing the problem. Also, while debugging, on the machine that does not work you can see the items in the collection, and if you remove ordering all together it will work. This has got me completely stumped.

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties, for example select all entries between times T1 and T2 where the noiselevel is smaller than X. On the other hand, I would like to use a NoSQL/Key-Value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across. I know that you cannot use multiple inequality filters for the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder.. Is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, Mongo DB? I've looked in the Documentation, but the only thing I've found was the following snippet in their docu: Note that any of the operators on this page can be combined in the same query document. For example, to find all document where j is not equal to 3 and k is greater than 10, you'd query like so: db.things.find({j: {$ne: 3}, k: {$gt: 10} }); So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-) Any input and enlightenment is welcome :-)
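
    For the MongoDB part of the question: its query language does accept range conditions on two different fields in a single filter, so the example can be written directly (collection and field names are invented to match the description, with t1, t2 and x standing in for the bounds):

        db.readings.find({
            time:       {$gte: t1, $lte: t2},
            noiselevel: {$lt: x}
        })

    The limitation shows up on the index side rather than the query side: a compound index can only seek on one of the two ranges (typically the leading time range, with noiselevel filtered afterwards), which is the usual cost of combining two inequalities.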

    Read the article

  • Google App Engine - DELETE JPQL Query and Cascading

    - by Taylor Leese
    I noticed that the children of PersistentUser are not deleted when using the JPQL query below. However, the children are deleted if I perform an entityManager.remove(object). Is this expected? Why doesn't the JPQL query below also perform a cascaded delete? @OneToMany(mappedBy = "persistentUser", cascade = CascadeType.ALL) private Collection<PersistentLogin> persistentLogins; ... @Override @Transactional public final void removeUserTokens(final String username) { final Query query = entityManager.createQuery( "DELETE FROM PersistentUser p WHERE username = :username"); query.setParameter("username", username); query.executeUpdate(); }
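
    This is expected JPA behaviour rather than a Datastore quirk: a bulk DELETE statement goes straight to the datastore and bypasses cascade settings, which only apply when entities are removed through the EntityManager. A sketch of the remove-based variant (names taken from the question):

        @Override
        @Transactional
        public final void removeUserTokens(final String username) {
            final Query query = entityManager.createQuery(
                "SELECT p FROM PersistentUser p WHERE p.username = :username");
            query.setParameter("username", username);
            for (final Object user : query.getResultList()) {
                entityManager.remove(user);  // cascades to the persistentLogins collection
            }
        }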

    Read the article

  • Error when loading YAML config files in Rails

    - by ZelluX
    I am configuring Rails with MongoDB, and I found a strange problem when parsing the config/mongo.yml file. config/mongo.yml is generated by executing script/rails generate mongo_mapper:config, and it looks like the following:

        defaults: &defaults
          host: 127.0.0.1
          port: 27017

        development:
          <<: *defaults
          database: tc_web_development

        test:
          <<: *defaults
          database: tc_web_test

    From the config file we can see the objects development and test should both have a database field. But when it is parsed and loaded in config/initializers/mongo.rb:

        config = YAML::load(File.read(Rails.root.join('config/mongo.yml')))
        puts config.inspect
        MongoMapper.setup(config, Rails.env)

    the strange thing comes: the output of puts config.inspect is

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017}, "test"=>{"host"=>"127.0.0.1", "port"=>27017}}

    which does not contain the database attribute. But when I execute the same statements in a plain ruby console, instead of using rails console, mongo.yml is parsed in the right way:

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_development"}, "test"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_test"}}

    I am wondering what may be the cause of this problem. Any ideas? Thanks.
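
    One thing worth checking (an assumption, not something the output above proves): Rails may load the file with a different YAML engine (syck versus psych) than the plain ruby console, and the two engines have not always agreed on how the '<<' merge key combines with sibling keys. A workaround that sidesteps merge keys entirely is to spell the settings out per environment:

        development:
          host: 127.0.0.1
          port: 27017
          database: tc_web_development

        test:
          host: 127.0.0.1
          port: 27017
          database: tc_web_test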

    Read the article

  • MS Access CrossTab query - across 3 tables

    - by Prembo
    Hi, I have the following 3 tables:

    1) Sweetness table

        FruitIndex  CountryIndex  Sweetness
        1           1             10
        1           2             20
        1           3             400
        2           1             50
        2           2             123
        2           3             1
        3           1             49
        3           2             40
        3           3             2

    2) Fruit Name table

        FruitIndex  FruitName
        1           Apple
        2           Orange
        3           Peaches

    3) Country Name table

        CountryIndex  CountryName
        1             UnitedStates
        2             Canada
        3             Mexico

    I'm trying to perform a CrossTab SQL query to end up with:

        Fruit\Country  UnitedStates  Canada  Mexico
        Apple          10            20      400
        Orange         50            123     1
        Peaches        49            40      2

    The challenging part is to label the rows/columns with the relevant names from the name tables. I can use MS Access to design 2 queries (create the joins of the fruit/country name tables with the Sweetness table, then perform the crosstab query); however, I'm having trouble doing this in a single query. I've attempted nesting the 1st query's SQL into the 2nd, but it doesn't seem to work. Unfortunately, my solution needs to be wholly SQL, as it is an embedded SQL query (I cannot rely on the query designer in MS Access, etc.). Any help greatly appreciated. Prembo.
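
    Access's crosstab syntax (TRANSFORM ... PIVOT) accepts joins directly, so the whole thing fits in one statement. A sketch, assuming the underlying tables are actually named Sweetness, FruitName and CountryName (substitute the real names):

        TRANSFORM Sum(s.Sweetness) AS TotalSweetness
        SELECT f.FruitName
        FROM (Sweetness AS s
              INNER JOIN FruitName AS f ON s.FruitIndex = f.FruitIndex)
              INNER JOIN CountryName AS c ON s.CountryIndex = c.CountryIndex
        GROUP BY f.FruitName
        PIVOT c.CountryName;

    Access insists on the parentheses around the first join when a second one is chained, and the PIVOT column's values (the country names) become the output column headings.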

    Read the article
