Search Results

Search found 22897 results on 916 pages for 'query processing'.

Page 322 of 916

  • Issue Passing NSMutableDictionary to Method

    - by roswell
    Hello all, I'm trying some basic iPhone programming -- all has been going well, but recently I've hit a bit of a roadblock. I've got a chunk of code that passes an NSMutableDictionary (amongst other things) to a method in another class: [self.shuttle makeAPICallAndReturnResultsUsingMode:@"login" module:@"login" query:credentials]; The NSMutableDictionary credentials is defined a bit above as follows: NSMutableDictionary *credentials = [[NSMutableDictionary alloc] init]; [credentials setObject:username forKey:@"username"]; [credentials setObject:password forKey:@"password"]; The method that receives it looks like this: -(id)makeAPICallAndReturnResultsUsingMode:(NSString *)mode module:(NSString *)module query:(NSMutableDictionary *)query From debugging I have determined that the code works fine up until this point within the above method: [query setObject:self.sessionID forKey:@"session_id"]; At this point the application terminates -- the Console reports this exception: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFDictionary setObject:forKey:]: method sent to an uninitialized mutable dictionary object'. This leads me to believe that I must initialize the NSMutableDictionary in some way in my new method before I can access it, but I have no idea how. Any advice?

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6 we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings, 2) a background thread that auto-commits long-running transactions, and 3) enhancements to binlog performance. Let's go over each of these features one by one; in the last section we will go over a couple of internally performed performance tests.

    Support for multiple table mappings. In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users may want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, with different mappings for different connections, is therefore a very desirable feature, and in this GA release both are possible. We will discuss the key concepts and the key steps in using this feature.

    1) "Mapping name" in the "get" and "set" commands. In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached is able to look up such a table whenever it is referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the registration name in get or set commands, in the form "get @@new_mapping_name.key"; the "@@" prefix is required to signal a mapped-table change. The key and the mapping name are separated by a configurable delimiter, "." by default. So the syntax is: get [@@mapping_name.]key_name, set [@@mapping_name.]key_name, or get @@mapping_name, set @@mapping_name.

    Here is an example. Let's set up three tables in the "containers" table. The first maps the InnoDB table "test/demo_test" under the mapping name "setup_1": INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY"); Similarly, we set up table mappings for table "test/new_demo" under the name "setup_2" and for table "my_database/my_demo" under the name "setup_3": INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x"); INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx"); To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues: get @@setup_3.key_a (this also outputs the value corresponding to the key "key_a"), or simply get @@setup_3. Once this is done, the connection stays switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands such as: get key_b, set key_c 0 0 7. These operations are all directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter. For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table, named "table_map_delimiter": INSERT INTO config_options VALUES("table_map_delimiter", "."); So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping. Once there are multiple table mappings, there must always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default mapping; otherwise the first row of the containers table is chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, when a user wants to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command. In addition, we also extended the protocol with a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue: bind setup_3. This switches the connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions. This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to see "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, certain row locks will be held for extended periods of time, which can reduce concurrency. To deal with this, we introduced a background thread that "auto-commits" transactions if they have been idle for a certain amount of time (5 seconds by default). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction has been idle longer than the limit and is not being used, it is committed. This limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large values. It also reduces the number of locks that could be held by long-running transactions, and thus further increases concurrency.

    Enhancement in binlog performance. As you may know, binlog operations are not handled by the InnoDB storage engine; they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was also quite a bit of overhead in creating and destroying these constructs that bogged performance down. With this release, we made the necessary changes to keep the MySQL constructs alive as long as they are valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. There is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, but it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance study. Amerandra of our System QA team conducted some performance studies on queries issued through our InnoDB Memcached connection and through the plain SQL end, with some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB memory, Intel Xeon 2.0 GHz X86_64 CPUs (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on Set operations

        Connections | Memcached plugin Set (QPS), memcached-threads=8*** | 5.6.7-RC* Set (QPS)** | X faster
        8           | 30,000                                             | 5,600                 | 5.36
        32          | 59,000                                             | 13,000                | 4.54
        128         | 68,000                                             | 8,000                 | 8.50
        512         | 63,000                                             | 6,800                 | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted; if it does, the "value" field of the matching row (by key) is updated. For the plain SQL comparison above, this logic is a precompiled stored procedure, and the query simply executes that procedure.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

        Connections | Memcached plugin Get (QPS), memcached-threads=8 | 5.6.7-RC Get (QPS) | X faster
        8           | 42,000                                          | 27,000             | 1.56
        32          | 101,000                                         | 55,000             | 1.83
        128         | 117,000                                         | 52,000             | 2.25
        512         | 109,000                                         | 52,000             | 2.10

    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary: In this release we added several much-desired features to InnoDB Memcached, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread that commits long-running idle transactions, so users can configure large batch writes/reads without worrying about a large number of rows being locked or uncommitted data not being visible. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
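
    Since the @@mapping_name prefix travels inside an ordinary memcached key, the multiple-table mapping can be exercised from any standard memcached client. Below is a minimal sketch using the python-memcached library; it assumes the daemon_memcached plugin is listening on the default port 11211 and that the "setup_1"/"setup_3" containers rows from the example above exist, so treat the host, mapping names and keys as illustrative.

        import memcache  # python-memcached

        # Connect to the InnoDB Memcached plugin (default port 11211 assumed).
        mc = memcache.Client(['127.0.0.1:11211'])

        # Switch this connection to the "setup_3" mapping (my_database/my_demo)
        # and read the row whose key column equals 'key_a'.
        value_a = mc.get('@@setup_3.key_a')

        # The connection now stays bound to setup_3, so plain keys work too.
        mc.set('key_c', 'new_val')
        value_b = mc.get('key_b')

        # Prefixing another registered mapping name switches tables again.
        other = mc.get('@@setup_1.some_key')
        print(value_a, value_b, other)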

    Read the article

  • Google Analytics API and Internal Search question

    - by Am
    I'm trying to use the Google Analytics API to query internal searches that happen on my site. I'd like to be able to query the keywords and the number of times each keyword was used in internal search, based on the URL of a page on the site. The idea is to find out which keywords direct the user to a particular page. Does anyone know which dimensions and metrics must be used to query that information?
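
    The site-search fields documented for the Core Reporting API are the usual starting point here. The sketch below only assembles the query parameters; the dimension and metric names (ga:searchKeyword, ga:searchUniques, ga:searchDestinationPage) are taken from the Dimensions & Metrics reference but should be verified there, and the view ID, dates and page path are placeholders.

        # Hypothetical Core Reporting API (v3) query for internal site search:
        # which keywords led searchers to a particular page.
        internal_search_query = {
            'ids': 'ga:12345678',                    # placeholder profile/view id
            'start_date': '2012-01-01',
            'end_date': '2012-01-31',
            'dimensions': 'ga:searchKeyword',        # the internal search term
            'metrics': 'ga:searchUniques',           # unique internal searches
            'filters': 'ga:searchDestinationPage==/target-page/',
            'sort': '-ga:searchUniques',
        }

        # Pass these parameters to whichever client you use for the
        # Core Reporting API (for example google-api-python-client).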

    Read the article

  • MySQL Connection Error in PHP

    - by user309381
    I have set the password for root and granted all privileges to root. Why does it say access is denied? Warning: mysql_query() [function.mysql-query]: Access denied for user 'SYSTEM'@'localhost' (using password: NO) in C:\wamp\www\photo_gallery\includes\database.php on line 56 Warning: mysql_query() [function.mysql-query]: A link to the server could not be established in C:\wamp\www\photo_gallery\includes\database.php on line 56 The Query has problem Access denied for user 'SYSTEM'@'localhost' (using password: NO) The code is as follows: <?php include("DB_Info.php"); class MySQLDatabase { public $connection; function _construct() { $this->open_connection(); } public function open_connection() { /* $DB_SERVER = "localhost"; $DB_USER = "root"; $DB_PASS = ""; $DB_NAME = "photo_gallery";*/ $this->connection = mysql_connect($DBSERVER,$DBUSER,$DBPASS); if(!$this->connection) { die("Database Connection Failed" . mysql_error()); } else { $db_select = mysql_select_db($DBNAME,$this->connection); if(!$db_select) { die("Database Selection Failed" . mysql_error()); } } } function mysql_prep($value) { if (get_magic_quotes_gpc()) { $value = stripslashes($value); } // Quote if not a number if (!is_numeric($value)) { $value = "'" . mysql_real_escape_string($value) . "'"; } return $value; } public function close_connection() { if(isset($this->connection)) { mysql_close($this->connection); unset($this->connection); } } public function query($sql) { //$sql = "SELECT*FROM users where id = 1"; $result = mysql_query($sql); $this->confirm_query($result); //$found_user = mysql_fetch_assoc($result); //echo $found_user; return $found_user; } private function confirm_query($result) { if(!$result) { die("The Query has problem" . mysql_error()); } } } $database = new MySQLDatabase(); ?>

    Read the article

  • How to pass a value from a method to property procedure in c#?

    - by sameer
    Here is my code. The Jewellery class is my main class, and it inherits a connection string class. class Jewellery : Connectionstr { string lmcode; public string LM_code /* Here I want to access the value returned by the method ReadData, i.e. displayString, and store it in the insert query below. */ { get { return lmcode; } set { lmcode = value; } } string mname; public string M_Name { get { return mname; } set { mname = value; } } string desc; public string Desc { get { return desc; } set { desc = value; } } public string ReadData() { OleDbDataReader dr; string jid = string.Empty; string displayString = string.Empty; String query = "select max(LM_code)from Master_Accounts"; Datamanager.RunExecuteReader(Constr, query); if (dr.Read()) { jid = dr[0].ToString(); if (string.IsNullOrEmpty(jid)) { jid = "AM0000"; } int len = jid.Length; string split = jid.Substring(2, len - 2); int num = Convert.ToInt32(split); num++; displayString = jid.Substring(0, 2) + num.ToString("0000"); dr.Close(); } return displayString; /* I want to pass this value to the LM_code property above. */ } public void add() { String query ="insert into Master_Accounts values ('" + LM_code + "','" + M_Name + "'," + "'" + Desc + "')"; Datamanager.RunExecuteNonQuery(Constr , query); } If possible, can you edit this code? Thanks in advance, sameer

    Read the article

  • Caching and cache invalidation in user controls?

    - by Rishabh Ohri
    Hi, in our .aspx pages we have many user controls, and each user control executes a SQL query. The caching mechanism to be followed is to fragment-cache each user control on the page and add a query dependency to the respective queries of the user controls. How can query dependency be achieved on fragment-cached data for invalidation?

    Read the article

  • C#: Handling Notifications: inheritance, events, or delegates?

    - by James Michael Hare
    Often times as developers we have to design a class where we get notification when certain things happen. In older object-oriented code this would often be implemented by overriding methods -- with events, delegates, and interfaces, however, we have far more elegant options. So, when should you use each of these methods and what are their strengths and weaknesses? Now, for the purposes of this article when I say notification, I'm just talking about ways for a class to let a user know that something has occurred. This can be through any programmatic means such as inheritance, events, delegates, etc. So let's build some context. I'm sitting here thinking about a provider neutral messaging layer for the place I work, and I got to the point where I needed to design the message subscriber which will receive messages from the message bus. Basically, what we want is to be able to create a message listener and have it be called whenever a new message arrives. Now, back before the flood we would have done this via inheritance and an abstract class: 1:  2: // using inheritance - omitting argument null checks and halt logic 3: public abstract class MessageListener 4: { 5: private ISubscriber _subscriber; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber) 11: { 12: _subscriber = subscriber; 13: _messageThread = new Thread(MessageLoop); 14: _messageThread.Start(); 15: } 16:  17: // user will override this to process their messages 18: protected abstract void OnMessageReceived(Message msg); 19:  20: // handle the looping in the thread 21: private void MessageLoop() 22: { 23: while(!_isHalted) 24: { 25: // as long as processing, wait 1 second for message 26: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 27: if(msg != null) 28: { 29: OnMessageReceived(msg); 30: } 31: } 32: } 33: ... 34: } It seems so odd to write this kind of code now. Does it feel odd to you? Maybe it's just because I've gotten so used to delegation that I really don't like the feel of this. To me it is akin to saying that if I want to drive my car I need to derive a new instance of it just to put myself in the driver's seat. And yet, unquestionably, five years ago I would have probably written the code as you see above. To me, inheritance is a flawed approach for notifications due to several reasons: Inheritance is one of the HIGHEST forms of coupling. You can't seal the listener class because it depends on sub-classing to work. Because C# does not allow multiple-inheritance, I've spent my one inheritance implementing this class. Every time you need to listen to a bus, you have to derive a class which leads to lots of trivial sub-classes. The act of consuming a message should be a separate responsibility than the act of listening for a message (SRP). Inheritance is such a strong statement (this IS-A that) that it should only be used in building type hierarchies and not for overriding use-specific behaviors and notifications. Chances are, if a class needs to be inherited to be used, it most likely is not designed as well as it could be in today's modern programming languages. So lets look at the other tools available to us for getting notified instead. Here's a few other choices to consider. Have the listener expose a MessageReceived event. Have the listener accept a new IMessageHandler interface instance. Have the listener accept an Action<Message> delegate. 
Really, all of these are different forms of delegation. Now, .NET events are a bit heavier than the other types of delegates in terms of run-time execution, but they are a great way to allow others using your class to subscribe to your events: 1: // using event - ommiting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private bool _isHalted = false; 6: private Thread _messageThread; 7:  8: // assign the subscriber and start the messaging loop 9: public MessageListener(ISubscriber subscriber) 10: { 11: _subscriber = subscriber; 12: _messageThread = new Thread(MessageLoop); 13: _messageThread.Start(); 14: } 15:  16: // user will override this to process their messages 17: public event Action<Message> MessageReceived; 18:  19: // handle the looping in the thread 20: private void MessageLoop() 21: { 22: while(!_isHalted) 23: { 24: // as long as processing, wait 1 second for message 25: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 26: if(msg != null && MessageReceived != null) 27: { 28: MessageReceived(msg); 29: } 30: } 31: } 32: } Note, now we can seal the class to avoid changes and the user just needs to provide a message handling method: 1: theListener.MessageReceived += CustomReceiveMethod; However, personally I don't think events hold up as well in this case because events are largely optional. To me, what is the point of a listener if you create one with no event listeners? So in my mind, use events when handling the notification is optional. So how about the delegation via interface? I personally like this method quite a bit. Basically what it does is similar to inheritance method mentioned first, but better because it makes it easy to split the part of the class that doesn't change (the base listener behavior) from the part that does change (the user-specified action after receiving a message). So assuming we had an interface like: 1: public interface IMessageHandler 2: { 3: void OnMessageReceived(Message receivedMessage); 4: } Our listener would look like this: 1: // using delegation via interface - omitting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private IMessageHandler _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, IMessageHandler handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // handle the looping in the thread 19: private void MessageLoop() 20: { 21: while(!_isHalted) 22: { 23: // as long as processing, wait 1 second for message 24: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 25: if(msg != null) 26: { 27: _handler.OnMessageReceived(msg); 28: } 29: } 30: } 31: } And they would call it by creating a class that implements IMessageHandler and pass that instance into the constructor of the listener. I like that this alleviates the issues of inheritance and essentially forces you to provide a handler (as opposed to events) on construction. Well, this is good, but personally I think we could go one step further. While I like this better than events or inheritance, it still forces you to implement a specific method name. What if that name collides? 
Furthermore if you have lots of these you end up either with large classes inheriting multiple interfaces to implement one method, or lots of small classes. Also, if you had one class that wanted to manage messages from two different subscribers differently, it wouldn't be able to because the interface can't be overloaded. This brings me to using delegates directly. In general, every time I think about creating an interface for something, and if that interface contains only one method, I start thinking a delegate is a better approach. Now, that said delegates don't accomplish everything an interface can. Obviously having the interface allows you to refer to the classes that implement the interface which can be very handy. In this case, though, really all you want is a method to handle the messages. So let's look at a method delegate: 1: // using delegation via delegate - omitting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private Action<Message> _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, Action<Message> handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // handle the looping in the thread 19: private void MessageLoop() 20: { 21: while(!_isHalted) 22: { 23: // as long as processing, wait 1 second for message 24: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 25: if(msg != null) 26: { 27: _handler(msg); 28: } 29: } 30: } 31: } Here the MessageListener now takes an Action<Message>.  For those of you unfamiliar with the pre-defined delegate types in .NET, that is a method with the signature: void SomeMethodName(Message). The great thing about delegates is it gives you a lot of power. You could create an anonymous delegate, a lambda, or specify any other method as long as it satisfies the Action<Message> signature. This way, you don't need to define an arbitrary helper class or name the method a specific thing. Incidentally, we could combine both the interface and delegate approach to allow maximum flexibility. Doing this, the user could either pass in a delegate, or specify a delegate interface: 1: // using delegation - give users choice of interface or delegate 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private Action<Message> _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, Action<Message> handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // passes the interface method as a delegate using method group 19: public MessageListener(ISubscriber subscriber, IMessageHandler handler) 20: : this(subscriber, handler.OnMessageReceived) 21: { 22: } 23:  24: // handle the looping in the thread 25: private void MessageLoop() 26: { 27: while(!_isHalted) 28: { 29: // as long as processing, wait 1 second for message 30: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 31: if(msg != null) 32: { 33: _handler(msg); 34: } 35: } 36: } 37: } } This is the method I tend to prefer because it allows the user of the class to choose which method works best for them. 
You may be curious about the actual performance of these different methods. 1: Enter iterations: 2: 1000000 3:  4: Inheritance took 4 ms. 5: Events took 7 ms. 6: Interface delegation took 4 ms. 7: Lambda delegate took 5 ms. Before you get too caught up in the numbers, however, keep in mind that this is performance over over 1,000,000 iterations. Since they are all < 10 ms which boils down to fractions of a micro-second per iteration so really any of them are a fine choice performance wise. As such, I think the choice of what to do really boils down to what you're trying to do. Here's my guidelines: Inheritance should be used only when defining a collection of related types with implementation specific behaviors, it should not be used as a hook for users to add their own functionality. Events should be used when subscription is optional or multi-cast is desired. Interface delegation should be used when you wish to refer to implementing classes by the interface type or if the type requires several methods to be implemented. Delegate method delegation should be used when you only need to provide one method and do not need to refer to implementers by the interface name.

    Read the article

  • Nonetype object has no attribute '__getitem__'

    - by adohertyd
    I am trying to use an API wrapper downloaded from the net to get results from the new azure Bing API. I'm trying to implement it as per the instructions but getting the runtime error: Traceback (most recent call last): File "bingwrapper.py", line 4, in <module> bingsearch.request("affirmative action") File "/usr/local/lib/python2.7/dist-packages/bingsearch-0.1-py2.7.egg/bingsearch.py", line 8, in request return r.json['d']['results'] TypeError: 'NoneType' object has no attribute '__getitem__' This is the wrapper code: import requests URL = 'https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/Web?Query=%(query)s&$top=50&$format=json' API_KEY = 'SECRET_API_KEY' def request(query, **params): r = requests.get(URL % {'query': query}, auth=('', API_KEY)) return r.json['d']['results'] The instructions are: >>> import bingsearch >>> bingsearch.API_KEY='Your-Api-Key-Here' >>> r = bingsearch.request("Python Software Foundation") >>> r.status_code 200 >>> r[0]['Description'] u'Python Software Foundation Home Page. The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to ...' >>> r[0]['Url'] u'http://www.python.org/psf/ This is my code that uses the wrapper (as per the instructions): import bingsearch bingsearch.API_KEY='abcdefghijklmnopqrstuv' r = bingsearch.request("affirmative+action")
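
    A likely culprit: the wrapper indexes r.json, which on older requests releases was a property that silently becomes None whenever the response body is not valid JSON (for example on an authentication or quota error), and which on requests 1.0+ is a method that must be called. A hedged rework of the wrapper is sketched below; the single-quoting of the query string is an assumption about the Azure Datamarket endpoint worth verifying against its documentation.

        import requests
        try:
            from urllib.parse import quote   # Python 3
        except ImportError:
            from urllib import quote         # Python 2

        URL = ('https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/Web'
               '?Query=%(query)s&$top=50&$format=json')
        API_KEY = 'SECRET_API_KEY'

        def request(query, **params):
            # Datamarket expects the query wrapped in single quotes (assumption).
            q = quote("'" + query + "'")
            r = requests.get(URL % {'query': q}, auth=('', API_KEY))
            r.raise_for_status()             # surface HTTP/auth errors early
            # requests >= 1.0 exposes json() as a method; older versions a property.
            data = r.json() if callable(r.json) else r.json
            if data is None:
                raise ValueError('Bing did not return JSON: %r' % r.text[:200])
            return data['d']['results']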

    Read the article

  • Double of Total Problem

    - by Gopal
    Table1:

        ID  | WorkTime
        ----|---------
        001 | 10:50:00
        001 | 00:00:00
        002 | ...

    The WorkTime column's data type is varchar. SELECT ID, CONVERT(varchar(10), TotalSeconds1 / 3600) + ':' + RIGHT('00' + CONVERT(varchar(2), (TotalSeconds1 - TotalSeconds1 / 3600 * 3600) / 60), 2) + ':' + RIGHT('00' + CONVERT(varchar(2), TotalSeconds1 - (TotalSeconds1 / 3600 * 3600 + (TotalSeconds1 - TotalSeconds1 / 3600 * 3600) / 60 * 60)), 2) AS TotalWork From ( SELECT ID, SUM(DATEDIFF(second, CONVERT(datetime, '1/1/1900'), CONVERT(datetime, '1/1/1900 ' + WorkTime))) AS TotalSeconds1 FROM table1 group by ID) AS tab1 where id = '001' The above query shows double the total time. For example, from Table1 I want to calculate the total WorkTime; when I run the above query it shows ID 001 with WorkTime 21:40:00, but it should show ID 001 with WorkTime 10:50:00. How do I avoid the doubled WorkTime total? How should I modify my query? Need query help.

    Read the article

  • SQLiteException and SQLite error near "(": syntax error with Subsonic ActiveRecord

    - by nvuono
    I ran into an interesting error with the following LiNQ query using LiNQPad and when using Subsonic 3.0.x w/ActiveRecord within my project and wanted to share the error and resolution for anyone else who runs into it. The linq statement below is meant to group entries in the tblSystemsValues collection into their appropriate system and then extract the system with the highest ID. from ksf in KeySafetyFunction where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystems on ksf.ID equals sys.KeySafetyFunction join xval in (from t in tblSystemsValues group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g=>g.ID), MaxText = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).Checked }) on sys.ID equals xval.sysId select new {KSFDesc=ksf.Description, sys.Description, xval.MaxText, xval.MaxChecked} On its own, the subquery for grouping into groupedT works perfectly and the query to match up KeySafetyFunctions with their System in tblSystems also works perfectly on its own. However, when trying to run the completed query in linqpad or within my project I kept running into a SQLiteException SQLite Error Near "(" First I tried splitting the queries up within my project because I knew that I could just run a foreach loop over the results if necessary. However, I continued to receive the same exception! I eventually separated the query into three separate parts before I realized that it was the lazy execution of the queries that was killing me. It then became clear that adding the .ToList() specifier after the myProtectedSystem query below was the key to avoiding the lazy execution after combining and optimizing the query and being able to get my results despite the problems I encountered with the SQLite driver. // determine the max Text/Checked values for each system in tblSystemsValue var myProtectedValue = from t in tblSystemsValue.All() group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g => g.ID), MaxText = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).Checked}; // get the system description information and filter by Unit/Condition ID var myProtectedSystem = (from ksf in KeySafetyFunction.All() where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystem.All() on ksf.ID equals sys.KeySafetyFunction select new {KSFDesc = ksf.Description, sys.Description, sys.ID}).ToList(); // finally join everything together AFTER forcing execution with .ToList() var joined = from protectedSys in myProtectedSystem join protectedVal in myProtectedValue on protectedSys.ID equals protectedVal.sysId select new {protectedSys.KSFDesc, protectedSys.Description, protectedVal.MaxChecked, protectedVal.MaxText}; // print the gratifying debug results foreach(var protectedItem in joined) { System.Diagnostics.Debug.WriteLine(protectedItem.Description + ", " + protectedItem.KSFDesc + ", " + protectedItem.MaxText + ", " + protectedItem.MaxChecked); }

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time assuming that there are no gaps in the windows (i.e. duration < interval), and limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing! 
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time: Adjusted Events StartTime EndTime Payload 6/28/2012 12:00:01 AM 12/31/9999 11:59:59 PM e0 6/28/2012 12:00:03 AM 6/28/2012 12:00:07 AM e1 6/28/2012 12:00:05 AM 6/28/2012 12:00:15 AM e2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM e3 Query StartTime EndTime Payload 6/28/2012 12:00:01 AM 6/28/2012 12:00:03 AM 1 6/28/2012 12:00:03 AM 6/28/2012 12:00:05 AM 2 6/28/2012 12:00:05 AM 6/28/2012 12:00:07 AM 3 6/28/2012 12:00:07 AM 6/28/2012 12:00:11 AM 2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM 3 6/28/2012 12:00:15 AM 12/31/9999 11:59:59 PM 1 Regards, The StreamInsight Team

    Read the article

  • Read alphanumeric characters from csv file in C#

    - by Prasad
    I am using the following code to read my csv file: public DataTable ParseCSV(string path) { if (!File.Exists(path)) return null; string full = Path.GetFullPath(path); string file = Path.GetFileName(full); string dir = Path.GetDirectoryName(full); //create the "database" connection string string connString = "Provider=Microsoft.ACE.OLEDB.12.0;" + "Data Source=\"" + dir + "\\\";" + "Extended Properties=\"text;HDR=Yes;FMT=Delimited;IMEX=1\""; //create the database query string query = "SELECT * FROM " + file; //create a DataTable to hold the query results DataTable dTable = new DataTable(); //create an OleDbDataAdapter to execute the query OleDbDataAdapter dAdapter = new OleDbDataAdapter(query, connString); //fill the DataTable dAdapter.Fill(dTable); dAdapter.Dispose(); return dTable; } But the above doesn't read alphanumeric values from the csv file; it reads only values that are purely numeric or purely alphabetic. What fix do I need to make so that it reads the alphanumeric values as well? Please suggest.

    Read the article

  • Zend_Paginator - Increase querys

    - by poru
    Hello, I started using Zend_Paginator and everything works fine, but I noticed that it issues one extra query, which slows the load time down. The additional query: SELECT COUNT(1) AS `zend_paginator_row_count` FROM `content` The normal query: SELECT `content`.`id`, `content`.`name` FROM `content` LIMIT 2 PHP: $adapter = new Zend_Paginator_Adapter_DbSelect($table->select()->from($table, array('id', 'name'))); $paginator = new Zend_Paginator($adapter); Could I merge the two queries into one (for better performance)?

    Read the article

  • issue in list of dict

    - by gaggina
    class MyOwnClass:
        # list that contains the queries
        queries = []
        # a template dict
        template_query = {}
        template_query['name'] = 'mat'
        template_query['age'] = '12'

    obj = MyOwnClass()
    query = obj.template_query
    query['name'] = 'sam'
    query['age'] = '23'
    obj.queries.append(query)

    query2 = obj.template_query
    query2['name'] = 'dj'
    query2['age'] = '19'
    obj.queries.append(query2)

    print obj.queries

    This gives me [{'age': '19', 'name': 'dj'}, {'age': '19', 'name': 'dj'}] while I expect to have [{'age': '23', 'name': 'sam'}, {'age': '19', 'name': 'dj'}]. I thought of using a template for this list because I'm going to use it very often and there are some default values that do not need to be changed. Why does the template_query itself change when I do this? I'm new to Python and I'm getting pretty confused.
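
    What happens here is that template_query is a class attribute, so obj.template_query returns the very same dictionary object both times; the list ends up holding two references to one dict, and the second round of assignments overwrites the first. A minimal sketch of the usual fix, copying the template for each new query (names mirror the question):

        import copy

        class MyOwnClass:
            queries = []
            template_query = {'name': 'mat', 'age': '12'}

        obj = MyOwnClass()

        query = dict(obj.template_query)        # shallow copy -> independent dict
        query['name'] = 'sam'
        query['age'] = '23'
        obj.queries.append(query)

        query2 = copy.copy(obj.template_query)  # equivalent to dict(...) here
        query2['name'] = 'dj'
        query2['age'] = '19'
        obj.queries.append(query2)

        print(obj.queries)   # two independent dicts: sam/23 and dj/19

    Note that queries is also a class attribute shared by every instance of MyOwnClass; if that is not intended, assign both attributes on self inside __init__ instead.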

    Read the article

  • error with slap.d while installing any new software

    - by ali haider
    I am trying to install Wireshark (this issue is not specific to Wireshark) on my Ubuntu box and I keep getting the following error for slapd: Setting up slapd (2.4.23-6ubuntu6.1) ... Creating initial configuration... mkdir: cannot create directory `/etc/ldap/slapd.d': File exists dpkg: error processing slapd (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: slapd Besides uninstalling or trying to update OpenLDAP or slapd, is there any other action that can be taken to resolve this issue? I am running the install as the root user, and I have tried moving the slapd configuration, so far without any luck. Any thoughts on troubleshooting or resolving this issue would be very welcome. Thanks in advance.

    Read the article

  • Querying tables based on other column values

    - by blcArmadillo
    Is there a way to query different databases based on the value of a column in the query? Say, for example, you have the following columns: id, part_id, attr_id, attr_value_ext, attr_value_int. You then run a query, and if attr_id is '1' it returns the attr_value_int column, but if attr_id is greater than '1' it joins data from another table based on attr_value_ext.

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...

    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of that query (long story). The grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it? EDIT: I have attempted to group the result in SQL using the exact grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process the query. I was under the impression that LINQ was not available until .NET 3.5. This is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that? EDIT: Starting a bounty because I believe this would be a good technique to have in the toolbox, no matter where the different resultsets are coming from. I believe knowing the most concise way to group any two somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
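
    Since LINQ is indeed a .NET 3.5 feature, the usual .NET 2.0 answer is a dictionary keyed on the category that accumulates both sums in a single pass over the resultset. Here is a short, language-agnostic sketch of that pattern (written in Python for brevity); in a C# 2.0 web app the same shape becomes a foreach over the DataTable rows filling a Dictionary<string, decimal[]>.

        # One pass over the rows, accumulating per-category sums.
        rows = [('A', 2, 3.50), ('A', 3, 2), ('B', 3, 2), ('B', 1, 5)]

        totals = {}
        for category, col2, col3 in rows:
            sums = totals.setdefault(category, [0, 0])
            sums[0] += col2
            sums[1] += col3

        for category in sorted(totals):
            col2_sum, col3_sum = totals[category]
            print(category, col2_sum, col3_sum)
        # A 5 5.5
        # B 4 7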

    Read the article

  • How do I convert tuple of tuples to list in one line (pythonic)?

    - by ThinkCode
    query = 'select mydata from mytable' cursor.execute(query) myoutput = cursor.fetchall() print myoutput (('aa',), ('bb',), ('cc',)) Why is it (cursor.fetchall) returning a tuple of tuples instead of a tuple since my query is asking for only one column of data? What is the best way of converting it to ['aa', 'bb', 'cc'] ? I can do something like this : mylist = [] myoutput = list(myoutput) for each in myoutput: mylist.append(each[0]) I am sure this isn't the best way of doing it. Please enlighten me!
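
    fetchall always returns one sequence per row, so a single-column SELECT still comes back as a 1-tuple per row; that is standard DB-API behaviour rather than anything specific to this query. Two equivalent one-liners, assuming myoutput is the (('aa',), ('bb',), ('cc',)) result above:

        myoutput = (('aa',), ('bb',), ('cc',))

        # List comprehension: take the first (and only) field of each row.
        flat = [row[0] for row in myoutput]                    # ['aa', 'bb', 'cc']

        # itertools.chain flattens the nested row tuples just as well.
        import itertools
        flat2 = list(itertools.chain.from_iterable(myoutput))  # ['aa', 'bb', 'cc']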

    Read the article

  • Calling a protected Windows executable with Perl

    - by Jake
    I'm trying to write a perl script that determines which users are currently logged into Windows by using query.exe (c:\Windows\system32\query.exe). Perl is unable to access this file, unable to execute it, even unable to see that it exists, as I've found with the following code: print `dir c:\\windows\\system32\\query*`; This produces the following output: 07/13/2009 05:16 PM 1,363,456 Query.dll 1 File(s) 1,363,456 bytes 0 Dir(s) 183,987,658,752 bytes free I've checked the user executing the script using perl's getlogin function, and it returns the name of a member of the local Administrators group (specifically, me). I've also tried adding read/execute permissions for "Everyone", but windows keeps giving me access denied errors when I try to modify this file's permissions. Finally, I've tried running perl.exe as an administrator but that doesn't fix the problem either. Is this something I can solve by changing some settings in Windows? Do I need to add something to my perl script? Or is there just no way to grant perl access to some of these processes?

    Read the article

  • Optimization of running total calculation in SQL for multiple values per join condition

    - by Kiril
    I have the following table (test_table):

        date | value
        -----|------
        d1   | 10.0
        d1   | 20.0
        d2   | 60.0
        d2   | 10.0
        d2   | -20.0
        d3   | 40.0

    I calculate the running total as follows. I use the same query twice: first I need to calculate the per-date values, and only afterwards can I calculate the running total. Otherwise, joining the two tables where date is not unique, I would get too many results from the join: SELECT t1.date, SUM(t2.value) AS total FROM (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t1 JOIN (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t2 ON t1.date >= t2.date GROUP BY t1.date ORDER BY t1.date This gives me (which is fine):

        date | total
        -----|------
        d1   | 30.0
        d2   | 80.0
        d3   | 120.0

    BUT, this query isn't very efficient, because I need to change conditions in two places if necessary. In production, test_table is a lot bigger (about 4 million rows), and the query takes too long to complete. Question: how can I avoid using the same query twice?
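
    One way to sidestep the duplicated subquery is to let SQL do only the per-date aggregation and accumulate the running total in application code, which is a single cheap pass over the already-grouped rows. A minimal sketch (the rows list stands in for what the GROUP BY query would return through your MySQL driver):

        # Rows as returned by:
        #   SELECT date, SUM(value) FROM test_table GROUP BY date ORDER BY date
        rows = [('d1', 30.0), ('d2', 50.0), ('d3', 40.0)]

        running = 0.0
        running_totals = []
        for day, day_total in rows:
            running += day_total
            running_totals.append((day, running))

        print(running_totals)   # [('d1', 30.0), ('d2', 80.0), ('d3', 120.0)]

    If upgrading is an option, window functions (SUM(value) OVER (ORDER BY date)) express this in one SQL statement, but MySQL only gained them in version 8.0.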

    Read the article

  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (would look okay rendered alone on a quad) and are not image processing (don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for (all from the RenderMonkey samples): PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image processing effects. EDIT: this is an open-ended question and I wish there was a good way to split the bounty. I'll award the rep to the best answer on the last day.

    Read the article

  • MySql scoping problem with correlated subqueries

    - by Rolf
    Hi, I'm having this Mysql query, It works: SELECT nom ,prenom ,(SELECT GROUP_CONCAT(category_en) FROM (SELECT DISTINCT category_en FROM categories c WHERE id IN (SELECT DISTINCT category_id FROM m3allems_to_categories m2c WHERE m3allem_id = 37) ) cS ) categories ,(SELECT GROUP_CONCAT(area_en) FROM (SELECT DISTINCT area_en FROM areas c WHERE id IN (SELECT DISTINCT area_id FROM m3allems_to_areas m2a WHERE m3allem_id = 37) ) aSq ) areas FROM m3allems m WHERE m.id = 37 The result is: nom prenom categories areas Man Multi Carpentry,Paint,Walls Beirut,Baalbak,Saida It works correclty, but only when i hardcode into the query the id that I want (37). I want it to work for all entries in the m3allem table, so I try this: SELECT nom ,prenom ,(SELECT GROUP_CONCAT(category_en) FROM (SELECT DISTINCT category_en FROM categories c WHERE id IN (SELECT DISTINCT category_id FROM m3allems_to_categories m2c WHERE m3allem_id = m.id) ) cS ) categories ,(SELECT GROUP_CONCAT(area_en) FROM (SELECT DISTINCT area_en FROM areas c WHERE id IN (SELECT DISTINCT area_id FROM m3allems_to_areas m2a WHERE m3allem_id = m.id) ) aSq ) areas FROM m3allems m And I get an error: Unknown column 'm.id' in 'where clause' Why? From the MySql manual: 13.2.8.7. Correlated Subqueries [...] Scoping rule: MySQL evaluates from inside to outside. So... do this not work when the subquery is in a SELECT section? I did not read anything about that. Does anyone know? What should I do? It took me a long time to build this query... I know it's a monster query but it gets what I want in a single query, and I am so close to getting it to work! Can anyone help?

    Read the article

  • How to improve INSERT INTO ... SELECT locking behavior

    - by Artem
    In our production database, we ran the following pseudo-code SQL batch query running every hour: INSERT INTO TemporaryTable (SELECT FROM HighlyContentiousTableInInnoDb WHERE allKindsOfComplexConditions are true) Now this query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb, even though it was just reading from it. Which was making some other very simple queries take ~25 seconds (that's how long that other query takes). Then I discovered that InnoDB tables in such a case are actually locked by a SELECT! http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/ But I don't really like the solution in the article of selecting into an OUTFILE, it seems like a hack (temporary files on filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it in this way during the copy. Then I could just copy the HighlyContentiousTable to another table and do the query there.

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, which works perfectly only for some time. Later it always returns empty result set. Surprisingly restarting django app fixes it. And search works again but again only for short time (or very limiter number of queries). Heres my sphinx.conf: source src_questions { # data source type = mysql sql_host = xxxxxx sql_user = xxxxxx #replace with your db username sql_pass = xxxxxx #replace with your db password sql_db = xxxxxx #replace with your db name # these two are optional sql_port = xxxxxx #sql_sock = /var/lib/mysql/mysql.sock # pre-query, executed before the main fetch query sql_query_pre = SET NAMES utf8 # main document fetch query sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \ FROM question AS q \ WHERE q.deleted=0 \ # optional - used by command-line search utility to display document information sql_query_info = SELECT title, id, level FROM question WHERE id=$id sql_attr_uint = level } index questions { # which document source to index source = src_questions # this is path and index file name without extension # you may need to change this path or create this folder path = /home/rafal/core_index/index_questions # docinfo (ie. per-document attribute values) storage strategy docinfo = extern # morphology morphology = stem_en # stopwords file #stopwords = /var/data/sphinx/stopwords.txt # minimum word length min_word_len = 3 # uncomment next 2 lines to allow wildcard (*) searches min_infix_len = 1 enable_star = 1 # charset encoding type charset_type = utf-8 } # indexer settings indexer { # memory limit (default is 32M) mem_limit = 64M } # searchd settings searchd { # IP address on which search daemon will bind and accept # optional, default is to listen on all addresses, # ie. address = 0.0.0.0 address = 127.0.0.1 # port on which search daemon will listen port = 3312 # searchd run info is logged here - create or change the folder log = ../log/sphinx.log # all the search queries are logged here query_log = ../log/query.log # client read timeout, seconds read_timeout = 5 # maximum amount of children to fork max_children = 30 # a file which will contain searchd process ID pid_file = searchd.pid # maximum amount of matches this daemon would ever retrieve # from each index and serve to client max_matches = 1000 } and heres my search part from views.py: content = Question.search.query(keywords) if level: content = content.filter(level=level)#level is array of integers There are no errors in any logs, it just isnt returning any results. All help would be most appreciated.

    Read the article

  • Invalid parameter number: number of bound variables does not match number of tokens

    - by Alex
    I have a table 'objects' with a few columns: object_id:int, object_type:int, object_status:int, object_lati:float, object_long:float. My query is: $stmt = $db->query('SELECT o.object_id, o.object_type, o.object_status, o.object_lati, o.object_long FROM objects o WHERE o.object_id = 1'); $res = $stmt->fetch(); PDO throws this error: SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens When I remove the object_lati or object_long column, the query works fine.

    Read the article
