Search Results

Search found 22897 results on 916 pages for 'query processing'.


  • Do I need to use http redirect code 302 or 307?

    - by Iain Fraser
    I am working on a CMS that uses a search facility to output a list of content items. You can use this facility as a search engine, but in this instance I am using it to output the current month's Media Releases from an archive of all Media Releases. The default parameters for these "Data Lists", as they are called, don't allow you to specify "current month" or "current year" for publication date - only "last x days" or "from dateA to dateB". The search facility will accept query-string parameters, though, so I intend to code around it like this:
    1. Page loads.
    2. How many days into the current month are we?
    3. Do we have a query string that asks for a list including this many days?
    4. If no, redirect the client back to this page with the appropriate query string included.
    5. If yes, allow the CMS to process the query.
    Now here's the rub. Suppose the spider from your favourite search engine comes along and tries to index your main Media Releases page. If you were to use a 301 redirect to the default query page, the spider would assume the main page was defunct and choose to add the query page to its index instead of the main page. Now I see that 302 and 307 indicate that a page has been moved temporarily; if I do this, are spiders likely to pop the main page into their index like I want them to? Thanks very much in advance for your help and advice. Kind regards, Iain
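
    A minimal PHP sketch of the flow described above (this is not the poster's CMS code; the page URL and the 'days' parameter name are made up for illustration):

        <?php
        // Days elapsed in the current month, e.g. 14 on the 14th.
        $daysIntoMonth = (int) date('j');

        // If the query string doesn't already ask for this many days,
        // bounce the client back with it included. A 302 (or 307) keeps
        // the redirect "temporary", so spiders keep the main URL indexed.
        if (!isset($_GET['days']) || (int) $_GET['days'] !== $daysIntoMonth) {
            header('Location: /media-releases?days=' . $daysIntoMonth, true, 302);
            exit;
        }
        // Otherwise fall through and let the CMS process the query.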


  • How to pass a value from a method to property procedure in c#?

    - by sameer
    Here is my code. The Jewellery class is my main class, in which I am inheriting a connection string class. I want to access the value returned by the ReadData method (i.e. displayString), store it in the LM_code property, and use it in the insert query below.

        class Jewellery : Connectionstr
        {
            string lmcode;
            public string LM_code // <-- I want the value of ReadData (displayString) stored here
            {
                get { return lmcode; }
                set { lmcode = value; }
            }

            string mname;
            public string M_Name
            {
                get { return mname; }
                set { mname = value; }
            }

            string desc;
            public string Desc
            {
                get { return desc; }
                set { desc = value; }
            }

            public string ReadData()
            {
                OleDbDataReader dr;
                string jid = string.Empty;
                string displayString = string.Empty;
                String query = "select max(LM_code) from Master_Accounts";
                Datamanager.RunExecuteReader(Constr, query); // note: the reader returned here is never assigned to dr
                if (dr.Read())
                {
                    jid = dr[0].ToString();
                    if (string.IsNullOrEmpty(jid))
                    {
                        jid = "AM0000";
                    }
                    int len = jid.Length;
                    string split = jid.Substring(2, len - 2);
                    int num = Convert.ToInt32(split);
                    num++;
                    displayString = jid.Substring(0, 2) + num.ToString("0000");
                    dr.Close();
                }
                return displayString; // I want to pass this value to the LM_code property above.
            }

            public void add()
            {
                String query = "insert into Master_Accounts values ('" + LM_code + "','" + M_Name + "','" + Desc + "')";
                Datamanager.RunExecuteNonQuery(Constr, query);
            }
        }

    If possible, can you edit this code? Thanks in advance, sameer


  • POP Culture

    - by [email protected]
    When we hear the word POP, we normally think of a soft drink, or a soda, while for others, it might be their favourite kind of music. In my case, it's the sound my knee makes when I bend down. Within Oracle, though, when we talk about POP we are referring to the Partner Ordering Portal. The Partner Ordering Portal, or POP as we like to call it, provides AutoVue Partners with a way to submit their orders online. POP offers Partners up-to-date pricing and licensing information; efficient order processing, as most data is validated on screen, thereby reducing errors and enabling faster processing; and online order status and tracking. POP is not yet available in every country, but it is available in most. Click here to check out the POP home page (OPN login information required) to see if your country of business is eligible to use POP, and for access to creating an account, watching instructional training viewlets, etc.


  • Using GROUP_CONCAT in phpMyAdmin shows the result as [BLOB - 3B]

    - by Itay Moav
    I have a query which uses MySQL's GROUP_CONCAT on an integer field. I am using phpMyAdmin to develop this query. My problem is that instead of showing 1,2 - the result of the concatenated field - I get [BLOB - 3B]. The query is SELECT rec_id, GROUP_CONCAT(user_id) FROM t1 GROUP BY rec_id (both fields are unsigned int, both are not unique). What should I add to see the actual results?
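
    A sketch of the usual fix (assuming MySQL; not from the post): GROUP_CONCAT of an integer column comes back as a binary string, which phpMyAdmin renders as a BLOB, so cast the value to CHAR first. The query string here can also be pasted straight into phpMyAdmin:

        <?php
        // Casting user_id to CHAR makes GROUP_CONCAT return text, not a BLOB.
        $sql = "SELECT rec_id, GROUP_CONCAT(CAST(user_id AS CHAR)) AS user_ids
                FROM t1
                GROUP BY rec_id";
        // $result = mysqli_query($link, $sql); // $link is a hypothetical connection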


  • Improve your Application Performance with .NET Framework 4.0

    Nice article on CodeGuru. The processors we use today are quite different from those of just a few years ago, as most processors today provide multiple cores and/or multiple threads. With multiple cores and/or threads we need to change how we tackle problems in code. Yes, we can still continue to write code that performs an action in a top-down fashion to complete a task. This approach will continue to work; however, you are not taking advantage of the extra processing power available. The best way to take advantage of the extra cores prior to .NET Framework 4.0 was to create threads and/or utilize the ThreadPool. For many developers, utilizing Threads or the ThreadPool can be a little daunting. The .NET 4.0 Framework drastically simplifies the process of utilizing the extra processing power through the Task Parallel Library (TPL). The article covers the following topics: "Data Parallelism", "Parallel LINQ (PLINQ)" and "Task Parallelism".


  • libnotify1 (>= 0.4.4) and libnotify1-gtk2.10 for Ubuntu 11.10

    - by Aamir
    I have recently downloaded the latest version of Lotus Symphony 3.0.1, but whenever I try to install it, it reports the following missing dependencies. Can anybody tell me where I can find them?

        dpkg: dependency problems prevent configuration of symphony:
         symphony depends on libnotify1 (>= 0.4.4); however:
          Package libnotify1 is not installed.
         symphony depends on libnotify1-gtk2.10; however:
          Package libnotify1-gtk2.10 is not installed.
        dpkg: error processing symphony (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         symphony


  • Merge and match oracle

    - by Dante
    I really need some help with my query. I am trying to merge two tables together, but I only want the data where Cast_Date and Sched_Cast_Date are the same. When I run the query I get the error "missing keyword" at line 21, column 13. I am sure that this is not the only potential error that I have. Could someone help me get this query up and running? Below is the query that I am running.

        merge into Dante5 d5
        using (SELECT bbp.subcar treadwell,
                      bbp.BATCH_ID batch_id,
                      bcs.SILICON silicon,
                      bcs.SULPHUR sulphur,
                      bcs.MANGANESE manganese,
                      bcs.PHOSPHORUS phosphorus,
                      bofcs.temperature temperature,
                      to_char(bbp.START_POUR, 'dd-MON-yy hh24:MI') start_pour,
                      to_char(bbp.END_POUR, 'dd-MON-yy hh24:MI') end_pour,
                      to_char(bbp.sched_cast_date, 'dd-mon-yy hh24:mi') Sched_cast_date
               FROM bof_chem_sample bcs, bof_batch_pour bbp, bof_celox_sample bofcs
               WHERE bcs.SAMPLE_CODE = to_char('D1')
                 and bofcs.sample_code = bcs.sample_code
                 and bofcs.batch_id = bcs.batch_id
                 and bcs.batch_id = bbp.batch_id
                 and bofcs.temperature0
                 AND bbp.START_POUR = to_DATE('01012011000000', 'ddMmyyyyHH24MISS')
                 and bbp.sched_cast_date <= sysdate) d3
        ON (d3.sched_cast_date = d5.sched_cast_date)
        when matched then
          delete where (d5 sched_cast_date = to_date('18012011', 'ddmmyyyy'))
        when not matched then
          update set d5 = batch_id = '99999'


  • Next in Concurrency

    - by Jatin
    For the past year I have been working a lot on concurrency in Java and have built and worked on many concurrent packages. So in terms of development in the concurrent world, I am quite confident. Further, I am very much interested in learning and understanding more about concurrent programming. But I am unable to answer for myself: what next? What more should I learn or work on to pick up skills related to multi-core processing? Is there any nice book (I read and enjoyed 'Concurrency in Practice' and 'Concurrent Programming in Java') or resource related to multi-core processing, so that I can go to the next level?


  • How can arguments to variadic functions be passed by reference in PHP?

    - by outis
    Assuming it's possible, how would one pass arguments by reference to a variadic function without generating a warning in PHP? We can no longer use the '&' operator in a function call, otherwise I'd accept that (even though it would be error prone, should a coder forget it). What inspired this are some old MySQLi wrapper classes that I unearthed (these days, I'd just use PDO). The only difference between the wrappers and the MySQLi classes is that the wrappers throw exceptions rather than returning FALSE.

        class DBException extends RuntimeException {}
        ...
        class MySQLi_throwing extends mysqli {
            ...
            function prepare($query) {
                $stmt = parent::prepare($query);
                if (!$stmt) {
                    throw new DBException($this->error, $this->errno);
                }
                return new MySQLi_stmt_throwing($this, $query, $stmt);
            }
        }

        // I don't remember why I switched from extension to composition, but
        // it shouldn't matter for this question.
        class MySQLi_stmt_throwing /* extends MySQLi_stmt */ {
            protected $_link, $_query, $_delegate;

            public function __construct($link, $query, $prepared) {
                //parent::__construct($link, $query);
                $this->_link = $link;
                $this->_query = $query;
                $this->_delegate = $prepared;
            }

            function bind_param($name, &$var) {
                return $this->_delegate->bind_param($name, $var);
            }

            function __call($name, $args) {
                //$rslt = call_user_func_array(array($this, 'parent::' . $name), $args);
                $rslt = call_user_func_array(array($this->_delegate, $name), $args);
                if (False === $rslt) {
                    throw new DBException($this->_link->error, $this->errno);
                }
                return $rslt;
            }
        }

    The difficulty lies in calling methods such as bind_result on the wrapper. Constant-arity functions (e.g. bind_param) can be explicitly defined, allowing for pass-by-reference. bind_result, however, needs all arguments to be pass-by-reference. If you call bind_result on an instance of MySQLi_stmt_throwing as-is, the arguments are passed by value and the binding won't take.

        try {
            $id = Null;
            $stmt = $db->prepare('SELECT id FROM tbl WHERE ...');
            $stmt->execute();
            $stmt->bind_result($id);
            // $id is still null at this point
            ...
        } catch (DBException $exc) {
            ...
        }

    Since the above classes are no longer in use, this question is merely a matter of curiosity. Alternate approaches to the wrapper classes are not relevant. Defining a method with a bunch of arguments taking Null default values is not correct (what if you define 20 arguments, but the function is called with 21?). Answers don't even need to be written in terms of MySQLi_stmt_throwing; it exists simply to provide a concrete example.
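
    One common workaround (a sketch, not from the post): accept an explicit array whose elements are references, since call_user_func_array passes referenced elements through by reference. The method name bind_result_array is made up for illustration:

        // Inside MySQLi_stmt_throwing: the caller supplies references explicitly.
        function bind_result_array(array $refs) {
            // Elements of $refs that are references stay references here.
            return call_user_func_array(array($this->_delegate, 'bind_result'), $refs);
        }

        // Usage: the '&' sits inside an array literal, not a function call,
        // so no warning is generated.
        // $stmt->bind_result_array(array(&$id));

    On PHP 5.6 and later, variadic by-reference parameters make this direct: function bind_result(&...$vars) { return $this->_delegate->bind_result(...$vars); }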


  • Depends libntfs10 error while installing testdisk

    - by user260223
    I want to install testdisk on Ubuntu 10.04 LTS, but I'm getting an error. Any help? Here is the output:

        # sudo apt-get install testdisk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          testdisk: Depends: libntfs10 (>= 2.0.0) but it is not installable
        E: Broken packages

    I also tried:

        wget http://launchpadlibrarian.net/40386584/libntfs10_2.0.0-1ubuntu4_i386.deb; sudo dpkg -i *.deb

    And I get this error:

        dpkg: error processing libntfs10_2.0.0-1ubuntu4_i386.deb (--install):
         package architecture (i386) does not match system (amd64)
        Errors were encountered while processing:
         libntfs10_2.0.0-1ubuntu4_i386.deb



  • Optimization of running total calculation in SQL for multiple values per join condition

    - by Kiril
    I have the following table (test_table):

        date    value
        ---------------
        d1       10.0
        d1       20.0
        d2       60.0
        d2       10.0
        d2      -20.0
        d3       40.0

    I calculate the running total as follows. I use the same query twice: first I need to calculate the values for a specific date, and afterwards I can calculate the running total. Otherwise, joining the two tables where date is not unique, I would get too many results from the join:

        SELECT t1.date, SUM(t2.value) AS total
        FROM (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t1
        JOIN (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t2
          ON t1.date >= t2.date
        GROUP BY t1.date
        ORDER BY t1.date

    This gives me the following, which is fine:

        date    total
        -------------
        d1       30.0
        d2       80.0
        d3      120.0

    BUT, this query isn't very efficient, because I need to change conditions in two places if necessary. In production, test_table is a lot bigger (over 4 million rows), and the query takes too much time to complete. Question: how can I avoid using the same query twice?
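
    One single-pass alternative (a sketch, not from the post, assuming MySQL 5.x where a user variable can carry the running sum): the per-date subquery runs only once. Shown here as the query string in PHP:

        <?php
        // @running carries the sum forward row by row; the derived table
        // sums per date once, and the JOIN initialises the variable.
        // Relies on MySQL evaluating the variable in row order.
        $sql = "SELECT t.date, @running := @running + t.value AS total
                FROM (SELECT date, SUM(value) AS value
                      FROM test_table
                      GROUP BY date
                      ORDER BY date) AS t
                JOIN (SELECT @running := 0) AS init";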


  • dataset not getting all the resultant tables i.e multiple tables are not being displayed in dataset

    - by Shantanu Gupta
    How do I fill multiple tables in a dataset? I'm using a query that returns four tables. At the frontend I am trying to fill all four resultant tables into the dataset. Here is my query. The query is not complete, but it is just a reference for my question:

        Select * from tblxyz compute sum(col1)

    Supposing this query returns more than one table, I want to fill all the tables into my dataset. I am filling the result like this:

        con.open();
        adp.fill(dset);
        con.close();

    Now when I check this dataset, it shows me that it has four tables, but only the first table's data is being displayed in it. The other three don't even have a schema. What do I need to do to get the desired output?


  • Telnet connection using c#

    - by alejandrobog
    Our office currently uses telnet to query an external server. The procedure is something like this:

        Connect - telnet open 128........ 25000
        Query - we paste the query and then hit alt + 019
        Response - we receive the response as text in the telnet window

    So I'm trying to make these queries automatic using a C# app. My code is the following. First the connection (no exceptions):

        SocketClient = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        String szIPSelected = txtIPAddress.Text;
        String szPort = txtPort.Text;
        int alPort = System.Convert.ToInt16(szPort, 10);
        System.Net.IPAddress remoteIPAddress = System.Net.IPAddress.Parse(szIPSelected);
        System.Net.IPEndPoint remoteEndPoint = new System.Net.IPEndPoint(remoteIPAddress, alPort);
        SocketClient.Connect(remoteEndPoint);

    Then I send the query (no exceptions):

        string data = "some query";
        byte[] byData = System.Text.Encoding.ASCII.GetBytes(data);
        SocketClient.Send(byData);

    Then I try to receive the response:

        byte[] buffer = new byte[10];
        Receive(SocketClient, buffer, 0, buffer.Length, 10000);
        string str = Encoding.ASCII.GetString(buffer, 0, buffer.Length);
        txtDataRx.Text = str;

        public static void Receive(Socket socket, byte[] buffer, int offset, int size, int timeout)
        {
            int startTickCount = Environment.TickCount;
            int received = 0; // how many bytes have already been received
            do
            {
                if (Environment.TickCount > startTickCount + timeout)
                    throw new Exception("Timeout.");
                try
                {
                    received += socket.Receive(buffer, offset + received, size - received, SocketFlags.None);
                }
                catch (SocketException ex)
                {
                    if (ex.SocketErrorCode == SocketError.WouldBlock ||
                        ex.SocketErrorCode == SocketError.IOPending ||
                        ex.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
                    {
                        // socket buffer is probably empty, wait and try again
                        Thread.Sleep(30);
                    }
                    else
                        throw ex; // any serious error occurred
                }
            } while (received < size);
        }

    Every time I try to receive the response I get "an existing connection was forcibly closed by the remote host". If I open telnet and send the same query, I get a response right away. Any ideas or suggestions?


  • working with arrays

    - by user295189
    I currently run a query which goes through the records and forms an array. A print_r on the query result yields the following:

        Array
        (
            [0] => Array
                (
                    [field1] => COMPLETE
                    [field2] => UNKNOWN
                    [field3] => Test comment
                )
            [1] => Array
                (
                    [field1] => COMPLETE
                    [field2] => UNKNOWN
                    [field3] => comment here
                )
            [2] => Array
                (
                    [field1] => COMPLETE
                    [field2] => UNKNOWN
                    [field3] => checking
                )
            [3] => Array
                (
                    [field1] => COMPLETE
                    [field2] => UNKNOWN
                    [field3] => testing
                )
            [4] => Array
                (
                    [field1] => COMPLETE
                    [field2] => UNKNOWN
                    [field3] => working
                )
        )

    Somehow I want to take this output and convert it back to PHP. So, for example, something like $myArray = array( ... ), where $myArray yields the same thing as print_r($query) yields. Thanks
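
    A sketch of one direct way to do this (not from the post): var_export() prints an array as valid PHP source, unlike print_r(), so its output can be pasted back into a script:

        <?php
        // $query is the result array from the post.
        // var_export($query, true) returns a string of valid PHP code such as:
        //   array(0 => array('field1' => 'COMPLETE', ...), ...)
        $phpSource = '$myArray = ' . var_export($query, true) . ';';
        echo $phpSource;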


  • Python form POST using urllib2 (also question on saving/using cookies)

    - by morpheous
    I am trying to write a function to post form data and save returned cookie info in a file so that the next time the page is visited, the cookie information is sent to the server (i.e. normal browser behavior). I wrote this relatively easily in C++ using curlib, but have spent almost an entire day trying to write it in Python using urllib2 - and still no success. This is what I have so far:

        import urllib, urllib2
        import logging

        # the path and filename to save your cookies in
        COOKIEFILE = 'cookies.lwp'
        cj = None
        ClientCookie = None
        cookielib = None
        logger = logging.getLogger(__name__)

        # Let's see if cookielib is available
        try:
            import cookielib
        except ImportError:
            logger.debug('importing cookielib failed. Trying ClientCookie')
            try:
                import ClientCookie
            except ImportError:
                logger.debug('ClientCookie isn\'t available either')
                urlopen = urllib2.urlopen
                Request = urllib2.Request
            else:
                logger.debug('imported ClientCookie succesfully')
                urlopen = ClientCookie.urlopen
                Request = ClientCookie.Request
                cj = ClientCookie.LWPCookieJar()
        else:
            logger.debug('Successfully imported cookielib')
            urlopen = urllib2.urlopen
            Request = urllib2.Request
            # This is a subclass of FileCookieJar
            # that has useful load and save methods
            cj = cookielib.LWPCookieJar()

        login_params = {'name': 'anon', 'password': 'pass'}

        def login(theurl, login_params):
            init_cookies();
            data = urllib.urlencode(login_params)
            txheaders = {'User-agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
            try:
                # create a request object
                req = Request(theurl, data, txheaders)
                # and open it to return a handle on the url
                handle = urlopen(req)
            except IOError, e:
                log.debug('Failed to open "%s".' % theurl)
                if hasattr(e, 'code'):
                    log.debug('Failed with error code - %s.' % e.code)
                elif hasattr(e, 'reason'):
                    log.debug("The error object has the following 'reason' attribute :" + e.reason)
                sys.exit()
            else:
                if cj is None:
                    log.debug('We don\'t have a cookie library available - sorry.')
                else:
                    print 'These are the cookies we have received so far :'
                    for index, cookie in enumerate(cj):
                        print index, ' : ', cookie
                    # save the cookies again
                    cj.save(COOKIEFILE)
                # return the data
                return handle.read()

        # FIXME: I need to fix this so that it takes into account any cookie data we may have stored
        def get_page(*args, **query):
            if len(args) != 1:
                raise ValueError("post_page() takes exactly 1 argument (%d given)" % len(args))
            url = args[0]
            query = urllib.urlencode(list(query.iteritems()))
            if not url.endswith('/') and query:
                url += '/'
            if query:
                url += "?" + query
            resource = urllib.urlopen(url)
            logger.debug('GET url "%s" => "%s", code %d' % (url, resource.url, resource.code))
            return resource.read()

    When I attempt to log in, I pass the correct username and password, yet the login fails, and no cookie data is saved. My two questions are: 1) Can anyone see what's wrong with the login() function, and how may I fix it? 2) How may I modify the get_page() function to make use of any cookie info I have saved?


  • count on LINQ union

    - by brechtvhb
    I have this LINQ statement:

        List<UserGroup> domains = UserRepository.Instance.UserIsAdminOf(currentUser.User_ID);
        query = (from doc in _db.Repository<Document>()
                 join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID
                 where domains.Contains(uug.UserGroup)
                 select doc)
                .Union(from doc in _db.Repository<Document>()
                       join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID
                       where domains.Contains(uug.UserGroup)
                       select doc);

    Running this statement doesn't cause any problems, but when I want to count the result set, the query suddenly runs quite slowly:

        totalRecords = query.Count();

    The result of this query is:

        SELECT COUNT([t5].[DocumentID])
        FROM (
            SELECT [t4].[DocumentID], [t4].[DocumentFrom], [t4].[DocumentTo]
            FROM (
                SELECT [t0].[DocumentID], [t0].[DocumentFrom], [t0].[DocumentTo]
                FROM [dbo].[Document] AS [t0]
                INNER JOIN [dbo].[User_UserGroup] AS [t1] ON [t0].[DocumentFrom] = [t1].[User_ID]
                WHERE ([t1].[UserGroupID] = 2) OR ([t1].[UserGroupID] = 3) OR ([t1].[UserGroupID] = 6)
                UNION
                SELECT [t2].[DocumentID], [t2].[DocumentFrom], [t2].[DocumentTo]
                FROM [dbo].[Document] AS [t2]
                INNER JOIN [dbo].[User_UserGroup] AS [t3] ON [t2].[DocumentTo] = [t3].[User_ID]
                WHERE ([t3].[UserGroupID] = 2) OR ([t3].[UserGroupID] = 3) OR ([t3].[UserGroupID] = 6)
            ) AS [t4]
        ) AS [t5]

    Can anyone help me improve the speed of the count query? Thanks in advance!


  • Which programming language for text editing?

    - by Ali
    I need a programming language for text editing and processing (replace, formatting, regex, string comparison, word processing, text analysis, etc.). Which programming language is more powerful and has more functions for this purpose? Since I use PHP for my web projects, I currently use PHP; but the fact is that PHP is a scripting language for web applications, and my current project is offline. I am curious whether other programming languages such as Perl, Python, C, C++, Java, etc. have more functionality for this purpose, and whether it is worth shifting the project.
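
    For scale, here is what a couple of the operations listed above look like in PHP (illustrative only, not from the post):

        <?php
        // Regex replace: collapse irregular whitespace to single spaces.
        $text  = "processing text,  with   irregular   spacing";
        $clean = preg_replace('/\s+/', ' ', $text);

        // String comparison: did the replace change anything?
        $unchanged = (strcmp($clean, $text) === 0);
        printf("%s (unchanged: %s)\n", $clean, $unchanged ? 'yes' : 'no');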


  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; 3) enhancements in binlog performance. Let's go over each of these features one by one, and in the last section we will go over a couple of internally performed performance tests.

    Support multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached features on different tables. Thus, being able to access different tables at run time, and having different mappings for different connections, becomes a very desirable feature. In this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature.

    1) "Mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required for signaling a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter; by default, it is ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user will do:

        get @@setup_3.key_a

    (this will output the value corresponding to key "key_a"), or simply:

        get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue to issue regular commands like:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter

    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES ("table_map_delimiter", ".");

    So if users want to change to a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if there exists a mapping name of "default", it will be chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default setting. Please note that user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) bind command

    In addition, we also extended the protocol and added a bind command. Its usage is fairly straightforward: to switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This will switch this connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now direct access to different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long running transactions

    This feature is related to the "batch" concept discussed in earlier blogs. The "batch" feature allows us to batch the read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to read-uncommitted, and in addition, it holds certain row locks for an extended period of time, which might reduce concurrency. To deal with this, we introduced a background thread that auto-commits transactions if they are idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction is idle longer than a certain limit and not being used, it commits the transaction. This limit is configurable via "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you will not need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. This also reduces the number of locks that could be held due to long-running transactions, and thus further increases concurrency.

    Enhancement in binlog performance

    As you might all know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and the considerable overhead of creating and destroying such constructs bogged the performance down. With this release, we made the necessary changes to keep the MySQL constructs as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale the write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when binlog is turned off, it is much better compared to the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance study

    Amerandra of our System QA team conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB memory, two Intel Xeon 2.0 GHz X86_64 CPUs with 4 cores each, and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables:

    Table 1: Performance comparison on Set operations

        Connections | Memcached plugin Set (QPS)*** | 5.6.7-RC* Set (QPS)** | X faster
                  8 |                        30,000 |                 5,600 |     5.36
                 32 |                        59,000 |                13,000 |     4.54
                128 |                        68,000 |                 8,000 |     8.50
                512 |                        63,000 |                 6,800 |     9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented through InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted. If it does exist, the "value" field of the matching row (by key) is updated. So when used in the above query, it is like a precompiled stored procedure, and the query just executes such procedures.
    *** added "-daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations (with memcached-threads=8)

        Connections | Memcached plugin Get (QPS) | 5.6.7-RC* Get (QPS) | X faster
                  8 |                     42,000 |              27,000 |     1.56
                 32 |                    101,000 |              55,000 |     1.83
                128 |                    117,000 |              52,000 |     2.25
                512 |                    109,000 |              52,000 |     2.10

    With the "get" query (the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary: In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We also now provide a background commit thread to commit long-running idle transactions, thus allowing users to configure large batch writes/reads without worrying about a large number of rows being held or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both the performance-enhancement and functionality areas to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012


  • Does anyone know how to appropriately deal with user timezones in rails 2.3?

    - by Amazing Jay
    We're building a Rails app that needs to display dates (and, more importantly, calculate them) in multiple time zones. Can anyone point me towards how to work with user time zones in Rails 2.3(.5 or .8)? The most inclusive article I've seen detailing how user time zones are supposed to work is here: http://wiki.rubyonrails.org/howtos/time-zones - although it is unclear when this was written or for what version of Rails. Specifically, it states that: "Time.zone - The time zone that is actually used for display purposes. This may be set manually to override config.time_zone on a per-request basis." The key terms being "display purposes" and "per-request basis". Locally on my machine, this is true. However, on production, neither is true. Setting Time.zone persists past the end of the request (to all subsequent requests) and also affects the way AR saves to the DB (basically treating any date as if it were already in UTC even when it's not), thus saving completely inappropriate values. We run Ruby Enterprise Edition on production with Passenger. If this is my problem, do we need to switch to JRuby or something else? To illustrate the problem, I put the following actions in my ApplicationController right now:

        def test
          p_time = Time.now.utc
          s_time = Time.utc(p_time.year, p_time.month, p_time.day, p_time.hour)
          logger.error "TIME.ZONE" + Time.zone.inspect
          logger.error ENV['TZ'].inspect
          logger.error p_time.inspect
          logger.error s_time.inspect
          jl = JunkLead.create!
          jl.date_at = s_time
          logger.error s_time.inspect
          logger.error jl.date_at.inspect
          jl.save!
          logger.error s_time.inspect
          logger.error jl.date_at.inspect
          render :nothing => true, :status => 200
        end

        def test2
          Time.zone = 'Mountain Time (US & Canada)'
          logger.error "TIME.ZONE" + Time.zone.inspect
          logger.error ENV['TZ'].inspect
          render :nothing => true, :status => 200
        end

        def test3
          Time.zone = 'UTC'
          logger.error "TIME.ZONE" + Time.zone.inspect
          logger.error ENV['TZ'].inspect
          render :nothing => true, :status => 200
        end

    and they yield the following:

        Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:50) [GET]
        TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0>
        nil
        Fri Dec 24 22:15:50 UTC 2010
        Fri Dec 24 22:00:00 UTC 2010
        Fri Dec 24 22:00:00 UTC 2010
        Fri, 24 Dec 2010 22:00:00 UTC +00:00
        Fri Dec 24 22:00:00 UTC 2010
        Fri, 24 Dec 2010 22:00:00 UTC +00:00
        Completed in 21ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test]

        Processing ApplicationController#test2 (for 98.202.196.203 at 2010-12-24 22:15:53) [GET]
        TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200>
        nil
        Completed in 143ms (View: 1, DB: 3) | 200 OK [http://www.dealsthatmatter.com/test2]

        Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:59) [GET]
        TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200>
        nil
        Fri Dec 24 22:15:59 UTC 2010
        Fri Dec 24 22:00:00 UTC 2010
        Fri Dec 24 22:00:00 UTC 2010
        Fri, 24 Dec 2010 15:00:00 MST -07:00
        Fri Dec 24 22:00:00 UTC 2010
        Fri, 24 Dec 2010 15:00:00 MST -07:00
        Completed in 20ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test]

        Processing ApplicationController#test3 (for 98.202.196.203 at 2010-12-24 22:16:03) [GET]
        TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0>
        nil
        Completed in 17ms (View: 0, DB: 2) | 200 OK [http://www.dealsthatmatter.com/test3]

        Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:16:04) [GET]
        TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0>
        nil
        Fri Dec 24 22:16:05 UTC 2010
        Fri Dec 24 22:00:00 UTC 2010
        Fri Dec 24 22:00:00 UTC 2010
        Fri, 24 Dec 2010 22:00:00 UTC +00:00
        Fri Dec 24 22:00:00 UTC 2010
        Fri, 24 Dec 2010 22:00:00 UTC +00:00
        Completed in 151ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test]

    It should be clear above that the second call to /test shows Time.zone set to Mountain, even though it shouldn't. Additionally, checking the database reveals that the test action, when run after test2, saved a JunkLead record with a date of 2010-12-22 15:00:00, which is clearly wrong.


  • Login failed when a web service tries to communicate with SharePoint 2007

    - by tata9999
    Hi, I created a very simple web service in ASP.NET 2.0 to query a list in SharePoint 2007, like this:

        namespace WebService1
        {
            /// <summary>
            /// Summary description for Service1
            /// </summary>
            [WebService(Namespace = "http://tempuri.org/")]
            [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
            [System.ComponentModel.ToolboxItem(false)]
            // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line.
            // [System.Web.Script.Services.ScriptService]
            public class Service1 : System.Web.Services.WebService
            {
                [WebMethod]
                public string HelloWorld()
                {
                    return "Hello World";
                }

                [WebMethod]
                public string ShowSPMyList()
                {
                    string username = this.User.Identity.Name;
                    return GetList();
                }

                private string GetList()
                {
                    string resutl = "";
                    SPSite siteCollection = new SPSite("http://localhost:89");
                    using (SPWeb web = siteCollection.OpenWeb())
                    {
                        SPList mylist = web.Lists["MySPList"];
                        SPQuery query = new SPQuery();
                        query.Query = "<Where><Eq><FieldRef Name=\"AssignedTo\"/><Value Type=\"Text\">Ramprasad</Value></Eq></Where>";
                        SPListItemCollection items = mylist.GetItems(query);
                        foreach (SPListItem item in items)
                        {
                            resutl = resutl + SPEncode.HtmlEncode(item["Title"].ToString());
                        }
                    }
                    return resutl;
                }
            }
        }

    This web service runs well when tested using the built-in server of Visual Studio 2008. The username indicates exactly my domain account (domain\myusername). However, when I create a virtual folder to host and launch this web service (still located on the same machine as SharePoint 2007), I get the following error when invoking the ShowSPMyList() method, at the line that executes OpenWeb(). These are the details of the error:

        System.Data.SqlClient.SqlException: Cannot open database "WSS_Content_8887ac57951146a290ca134778ddc3f8" requested by the login. The login failed.
        Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.

    Does anyone have any idea why this error happens? Why does the web service run fine inside Visual Studio 2008, but not when running stand-alone? I checked, and in both cases the username variable has the same value (domain\myusername). Thank you very much.


  • How to fill dataset when sql is returning more than one table

    - by Shantanu Gupta
    How do I fill multiple tables in a dataset? I'm using a query that returns four tables. At the frontend I am trying to fill all four resultant tables into the dataset. Here is my query. The query is not complete, but it is just a reference for my question:

        Select * from tblxyz compute sum(col1)

    Supposing this query returns more than one table, I want to fill all the tables into my dataset. I am filling the result like this:

        con.open();
        adp.fill(dset);
        con.close();

    Now when I check this dataset, it shows me that it has four tables, but only the first table's data is being displayed in it. The other three don't even have a schema. What do I need to do to get the desired output?


  • Mysql-server-5.5 broken after update

    - by WalrusTusks
    Using Ubuntu 12.04 desktop, I had LAMP installed on my computer and was using it as a server. However, after doing the upgrades one day, apt-get throws an error that mysql-server can't be configured, as it depends on another package:

        jay@rumbles:~$ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         mysql-server-5.5 : Depends: mysql-server-core-5.5 (= 5.5.22-0ubuntu1) but 5.5.24-0ubuntu0.12.04.1 is installed
        E: Unmet dependencies. Try using -f

        jay@rumbles:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
         mysql-server-5.5
        Suggested packages:
         tinyca mailx
        The following packages will be upgraded:
         mysql-server-5.5
        1 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/8,821 kB of archives.
        After this operation, 2,048 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        dpkg: dependency problems prevent configuration of mysql-server-5.5:
         mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.22-0ubuntu1); however:
          Version of mysql-server-core-5.5 on system is 5.5.24-0ubuntu0.12.04.1.
        dpkg: error processing mysql-server-5.5 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.5
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How can I fix this?


  • error with slapd while installing any new software

    - by ali haider
    I am trying to install wireshark (this issue is not specific to wireshark) on my Ubuntu box, and I keep getting the following error for slapd:

        Setting up slapd (2.4.23-6ubuntu6.1) ...
          Creating initial configuration... mkdir: cannot create directory `/etc/ldap/slapd.d': File exists
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd

    Besides uninstalling, or trying to update OpenLDAP or slapd, is there any other action that can be taken to resolve this issue? I am trying the install as the root user, and I have tried moving the slapd config file so far, but without any luck. Any thoughts on troubleshooting/resolving this issue will be quite welcome. Thanks in advance


  • make username and userid into array php

    - by Bharanikumar
    I want my array to look something like this:

        array("userid" => "username", "1" => "ganeshfriends", "2" => "tester")

    My MySQL query is something like this:

        $query = "select username, userid from tbluser";
        $result = mysql_query($query);
        while ($row = mysql_fetch_array($result)) {
            $items = array($row['userid'] => $row['username']);
        }
        print_r($items);

    Can you tell me how to make userid the key and username the value? Thanks
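
    A sketch of the fix (variable names match the post): assign into the array by key inside the loop, instead of recreating $items on every iteration, which overwrites the previous row:

        <?php
        $items = array();
        while ($row = mysql_fetch_array($result)) {
            // userid becomes the key, username the value
            $items[$row['userid']] = $row['username'];
        }
        print_r($items); // e.g. Array ( [1] => ganeshfriends [2] => tester )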

