Search Results

Search found 22756 results on 911 pages for 'power query'.


  • PHP mssql_query double quotes cannot be used

    - by Nilesh
    Hi all, In Java/JDBC, I can easily run the following SQL (NOTE the double quotes around column and table names) Select cus."customer_id" , cus."organisation_or_person" , cus."organisation_name" , cus."first_name" , cus."last_name" , cus."date_became_customer" , cus."other_customer_details" From "Contact_Management"."dbo"."Customers" cus But the same query in PHP errors out with an invalid syntax error: "Warning: mssql_query() [function.mssql-query]: message: Incorrect syntax near 'customer_id'. (severity 15) " But if I remove all the double quotes, the query works fine with no errors. The query is ported from a Java application, so I would like to keep the double quotes and the SQL as it is. Any alternative solutions? Thank you Nilesh

    Read the article

  • Do I need to use http redirect code 302 or 307?

    - by Iain Fraser
    I am working on a CMS that uses a search facility to output a list of content items. You can use this facility as a search engine, but in this instance I am using it to output the current month's Media Releases from an archive of all Media Releases. The default parameters for these "Data Lists" as they are called, don't allow you to specify "current month" or "current year" for publication date - only "last x days" or "from dateA to dateB". The search facility will accept querystring parameters though, so I intend to code around it like this: Page loads How many days into the current month are we? Do we have a query string that asks for a list including this many days? If no, redirect the client back to this page with the appropriate query-string included. If yes, allow the CMS to process the query Now here's the rub. Suppose the spider from your favourite search engine comes along and tries to index your main Media Releases page. If you were to use a 301 redirect to the default query page, the spider would assume the main page was defunct and choose to add the query page to its index instead of the main page. Now I see that 302 and 307 indicate that a page has been moved temporarily; if I do this, are spiders likely to pop the main page into their index like I want them to? Thanks very much in advance for your help and advice. Kind regards Iain
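
    For reference, a minimal sketch of the redirect step described above, assuming the page runs on ASP.NET (an assumption; the page name and the "days" query-string parameter are hypothetical). Response.Redirect issues an HTTP 302 by default, which tells crawlers the move is temporary and keeps the original URL in their index:

        // Hypothetical ASP.NET code-behind for the Media Releases page.
        protected void Page_Load(object sender, EventArgs e)
        {
            int daysIntoMonth = DateTime.Now.Day;              // how far into the current month we are
            string requested = Request.QueryString["days"];

            if (requested != daysIntoMonth.ToString())
            {
                // Redirect back to this page with the appropriate query string.
                // Response.Redirect sends 302 (Found), i.e. a temporary move.
                Response.Redirect(Request.Path + "?days=" + daysIntoMonth, true);
            }
            // otherwise let the CMS process the query as normal
        }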

    Read the article

  • using group_concat in PHPMYADMIN will show the result as [BLOB - 3B]

    - by Itay Moav
    I have a query which uses MySQL's GROUP_CONCAT on an integer field. I am using phpMyAdmin to develop this query. My problem is that instead of showing 1,2, which is the result of the concatenated field, I get [BLOB - 3B]. The query is SELECT rec_id,GROUP_CONCAT(user_id) FROM t1 GROUP BY rec_id (both fields are unsigned int, both are not unique) What should I add to see the actual results?

    Read the article

  • How to pass a value from a method to property procedure in c#?

    - by sameer
    Here is my code: The jewellery class is my main class in which i am inheriting a connection string class. class Jewellery : Connectionstr { string lmcode; public string LM_code/**/Here i want to access the value of the method ReadData i.e displaystring and i want to store this value in the insert query below.** { get { return lmcode; } set { lmcode = value; } } string mname; public string M_Name { get { return mname; } set { mname = value; } } string desc; public string Desc { get { return desc; } set { desc = value; } } public string ReadData() { OleDbDataReader dr; string jid = string.Empty; string displayString = string.Empty; String query = "select max(LM_code)from Master_Accounts"; Datamanager.RunExecuteReader(Constr, query); if (dr.Read()) { jid = dr[0].ToString(); if (string.IsNullOrEmpty(jid)) { jid = "AM0000"; } int len = jid.Length; string split = jid.Substring(2, len - 2); int num = Convert.ToInt32(split); num++; displayString = jid.Substring(0, 2) + num.ToString("0000"); dr.Close(); } **return displayString;** I want to pass this value to the above property procedure above i.e LM_code. } public void add() { String query ="insert into Master_Accounts values ('" + LM_code + "','" + M_Name + "'," + "'" + Desc + "')"; Datamanager.RunExecuteNonQuery(Constr , query);// } If possible can u edit this code! Anticipated thanks by sameer
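
    For reference, a minimal sketch of how the two pieces can be wired together, assuming the Jewellery class and the Datamanager helpers shown above: call ReadData(), assign its return value to the LM_code property, and only then run add(). (Note that inside ReadData() the reader returned by the data layer still needs to be assigned to dr before dr.Read() is called.)

        // Hypothetical usage of the class from the question.
        Jewellery item = new Jewellery();

        // ReadData() returns the next generated code (e.g. "AM0001");
        // storing it in the property makes it available to add().
        item.LM_code = item.ReadData();
        item.M_Name = "Gold Ring";     // sample values
        item.Desc = "22 carat";

        item.add();   // inserts LM_code, M_Name and Desc into Master_Accounts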

    Read the article

  • Improve your Application Performance with .NET Framework 4.0

    Nice article on CodeGuru. The processors we use today are quite different from those of just a few years ago, as most processors today provide multiple cores and/or multiple threads. With multiple cores and/or threads we need to change how we tackle problems in code. Yes, we can still continue to write code to perform an action in a top-down fashion to complete a task. This approach will continue to work; however, you are not taking advantage of the extra processing power available. The best way to take advantage of the extra cores prior to .NET Framework 4.0 was to create threads and/or utilize the ThreadPool. For many developers, utilizing Threads or the ThreadPool can be a little daunting. The .NET 4.0 Framework drastically simplified the process of utilizing the extra processing power through the Task Parallel Library (TPL). This article covers the following topics: “Data Parallelism”, “Parallel LINQ (PLINQ)” and “Task Parallelism”.
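
    A small, hedged illustration of the Task Parallel Library the article refers to; the loop body is just a stand-in for real work:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class TplDemo
        {
            static void Main()
            {
                // Parallel.For partitions the iterations across the available cores,
                // so the work below is spread over multiple threads automatically.
                Parallel.For(0, 10, i =>
                {
                    Console.WriteLine("Processing item {0} on thread {1}",
                        i, Thread.CurrentThread.ManagedThreadId);
                });

                // Task.Factory.StartNew covers the "task parallelism" case.
                Task t = Task.Factory.StartNew(() => Console.WriteLine("background task"));
                t.Wait();
            }
        }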

    Read the article

  • Merge and match oracle

    - by Dante
    I really need some help with my query. I am trying to merge two tables together, but I only want the data were Cast_Date and Sched_Cast_Date are the same. I try to run the query but I get the error missing keyword in the line 21 column 13. I am sure that this is not the only potential error that I have. Could someone help me to get this query up and running? Below is the query that I am running. merge into Dante5 d5 using (SELECT bbp.subcar treadwell, bbp.BATCH_ID batch_id, bcs.SILICON silicon, bcs.SULPHUR sulphur, bcs.MANGANESE manganese, bcs.PHOSPHORUS phosphorus, bofcs.temperature temperature, to_char(bbp.START_POUR, 'dd-MON-yy hh24:MI') start_pour, to_char(bbp.END_POUR, 'dd-MON-yy hh24:MI') end_pour, to_char(bbp.sched_cast_date, 'dd-mon-yy hh24:mi') Sched_cast_date FROM bof_chem_sample bcs, bof_batch_pour bbp, bof_celox_sample bofcs WHERE bcs.SAMPLE_CODE= to_char('D1') and bofcs.sample_code=bcs.sample_code and bofcs.batch_id=bcs.batch_id and bcs.batch_id = bbp.batch_id and bofcs.temperature0 AND bbp.START_POUR=to_DATE('01012011000000','ddMmyyyyHH24MISS') and bbp.sched_cast_date<=sysdate)d3 ON (d3.sched_cast_date=d5.sched_cast_date) when matched then delete where (d5 sched_cast_date=to_date('18012011','ddmmyyyy')) when not matched then update set d5=batch_id='99999'

    Read the article

  • How can arguments to variadic functions be passed by reference in PHP?

    - by outis
    Assuming it's possible, how would one pass arguments by reference to a variadic function without generating a warning in PHP? We can no longer use the '&' operator in a function call, otherwise I'd accept that (even though it would be error prone, should a coder forget it). What inspired this is are old MySQLi wrapper classes that I unearthed (these days, I'd just use PDO). The only difference between the wrappers and the MySQLi classes is the wrappers throw exceptions rather than returning FALSE. class DBException extends RuntimeException {} ... class MySQLi_throwing extends mysqli { ... function prepare($query) { $stmt = parent::prepare($query); if (!$stmt) { throw new DBException($this->error, $this->errno); } return new MySQLi_stmt_throwing($this, $query, $stmt); } } // I don't remember why I switched from extension to composition, but // it shouldn't matter for this question. class MySQLi_stmt_throwing /* extends MySQLi_stmt */ { protected $_link, $_query, $_delegate; public function __construct($link, $query, $prepared) { //parent::__construct($link, $query); $this->_link = $link; $this->_query = $query; $this->_delegate = $prepared; } function bind_param($name, &$var) { return $this->_delegate->bind_param($name, $var); } function __call($name, $args) { //$rslt = call_user_func_array(array($this, 'parent::' . $name), $args); $rslt = call_user_func_array(array($this->_delegate, $name), $args); if (False === $rslt) { throw new DBException($this->_link->error, $this->errno); } return $rslt; } } The difficulty lies in calling methods such as bind_result on the wrapper. Constant-arity functions (e.g. bind_param) can be explicitly defined, allowing for pass-by-reference. bind_result, however, needs all arguments to be pass-by-reference. If you call bind_result on an instance of MySQLi_stmt_throwing as-is, the arguments are passed by value and the binding won't take. try { $id = Null; $stmt = $db->prepare('SELECT id FROM tbl WHERE ...'); $stmt->execute() $stmt->bind_result($id); // $id is still null at this point ... } catch (DBException $exc) { ... } Since the above classes are no longer in use, this question is merely a matter of curiosity. Alternate approaches to the wrapper classes are not relevant. Defining a method with a bunch of arguments taking Null default values is not correct (what if you define 20 arguments, but the function is called with 21?). Answers don't even need to be written in terms of MySQL_stmt_throwing; it exists simply to provide a concrete example.

    Read the article

  • Telnet connection using c#

    - by alejandrobog
    Our office currently uses telnet to query an external server. The procedure is something like this. Connect - telnet opent 128........ 25000 Query - we paste the query and then hit alt + 019 Response - We receive the response as text in the telnet window So I’m trying to make this queries automatic using a c# app. My code is the following First the connection. (No exceptions) SocketClient = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); String szIPSelected = txtIPAddress.Text; String szPort = txtPort.Text; int alPort = System.Convert.ToInt16(szPort, 10); System.Net.IPAddress remoteIPAddress = System.Net.IPAddress.Parse(szIPSelected); System.Net.IPEndPoint remoteEndPoint = new System.Net.IPEndPoint(remoteIPAddress, alPort); SocketClient.Connect(remoteEndPoint); Then I send the query (No exceptions) string data ="some query"; byte[] byData = System.Text.Encoding.ASCII.GetBytes(data); SocketClient.Send(byData); Then I try to receive the response byte[] buffer = new byte[10]; Receive(SocketClient, buffer, 0, buffer.Length, 10000); string str = Encoding.ASCII.GetString(buffer, 0, buffer.Length); txtDataRx.Text = str; public static void Receive(Socket socket, byte[] buffer, int offset, int size, int timeout) { int startTickCount = Environment.TickCount; int received = 0; // how many bytes is already received do { if (Environment.TickCount > startTickCount + timeout) throw new Exception("Timeout."); try { received += socket.Receive(buffer, offset + received, size - received, SocketFlags.None); } catch (SocketException ex) { if (ex.SocketErrorCode == SocketError.WouldBlock || ex.SocketErrorCode == SocketError.IOPending || ex.SocketErrorCode == SocketError.NoBufferSpaceAvailable) { // socket buffer is probably empty, wait and try again Thread.Sleep(30); } else throw ex; // any serious error occurr } } while (received < size); } Every time I try to receive the response I get "an exsiting connetion has forcibly closed by the remote host" if open telnet and send the same query I get a response right away Any ideas, or suggestions?
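
    A hedged sketch of a receive loop that reads until the server stops sending or closes the socket, instead of insisting on a fixed 10-byte response. Whether this alone fixes the reset depends on what terminator the external server expects; the manual session ends the query with Alt + 019, so the automated query may also need that control character (or a trailing CR/LF) appended before sending — that part is an assumption:

        // Reads whatever the server sends until it closes the connection
        // or goes quiet for 'timeout' milliseconds. Sketch only.
        static string ReceiveAll(Socket socket, int timeout)
        {
            var sb = new System.Text.StringBuilder();
            byte[] buffer = new byte[4096];
            socket.ReceiveTimeout = timeout;
            try
            {
                int read;
                // Receive returns 0 when the remote host closes the connection.
                while ((read = socket.Receive(buffer)) > 0)
                {
                    sb.Append(System.Text.Encoding.ASCII.GetString(buffer, 0, read));
                }
            }
            catch (SocketException ex)
            {
                // A timeout here simply means "no more data for now".
                if (ex.SocketErrorCode != SocketError.TimedOut) throw;
            }
            return sb.ToString();
        }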

    Read the article

  • Problem installing ubuntu touch on galaxy nexus

    - by Francesco
    I've installed Ubuntu Touch on my Galaxy Nexus following the tutorial on the official site. However, the tutorial is not very clear. In particular, during the installation, user action on the phone is requested but not documented in the tutorial: 1) The phone asked me whether to reboot, wipe the cache or something else (I did nothing and the phone rebooted) 2) The phone asked me whether or not to replace CWM (or something similar). I answered no. After the installation all seemed to work correctly. However, after shutting it down, the phone can't power on anymore. When I push the power button the battery icon appears, showing that the battery is completely charged. What am I supposed to do?

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now they fear the worst – a meltdown of nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying – remember the Chernobyl incident in the Soviet Union? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of nuclear meltdowns are beyond the imagination of a common man: thousands of people lose their lives and many more lakhs of people suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms – a nuclear meltdown is an accident that happens due to severe overheating of a nuclear reactor and results in the release of nuclear radiation into the environment.  How A Nuclear Meltdown Happens According to Wikipedia A meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss of pressure control accident, a loss of coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth, ensure that multiple layers of safety systems are always present to make such accidents unlikely. Video – What Causes Nuclear Meltdown Al Jazeera news has a good analysis of the feared meltdown at Japan’s nuclear plants and also an animation of what causes a nuclear meltdown. cc image credit: flickr/jtjdt This article, titled What Is Nuclear Meltdown?, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • dataset not getting all the resultant tables i.e multiple tables are not being displayed in dataset

    - by Shantanu Gupta
    How do I fill multiple tables in a dataset? I am using a query that returns four tables. At the front end I am trying to fill all four resultant tables into a dataset. Here is my query. The query is not complete, but it is just a reference for my question: Select * from tblxyz compute sum(col1) Suppose this query returns more than one table; I want to fill all the tables into my dataset. I am filling the result like this con.open(); adp.fill(dset); con.close(); Now when I check this dataset, it shows me that it has four tables, but only the first table's data is displayed in it. The other three don't even have a schema. What do I need to do to get the desired output?
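
    For reference, a hedged ADO.NET sketch of how multiple result sets normally land in a DataSet (the connection string is hypothetical); each result set the batch returns becomes its own DataTable, so if only dset.Tables[0] is populated, the extra result sets are being lost before Fill sees them. Whether COMPUTE totals come back as separate result sets also depends on the provider:

        // Requires System, System.Data and System.Data.SqlClient.
        using (var con = new SqlConnection(connectionString))      // hypothetical connection string
        using (var adp = new SqlDataAdapter("Select * from tblxyz compute sum(col1)", con))
        {
            var dset = new DataSet();
            adp.Fill(dset);                 // Fill opens and closes the connection itself

            // Inspect every table that came back, not just the first one.
            for (int i = 0; i < dset.Tables.Count; i++)
            {
                Console.WriteLine("Table {0}: {1} rows", i, dset.Tables[i].Rows.Count);
            }
        }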

    Read the article

  • Building Private IaaS with SPARC and Oracle Solaris

    - by ferhat
    A superior enterprise cloud infrastructure with high-performing systems using built-in virtualization! We are happy to announce the expansion of Oracle Optimized Solution for Enterprise Cloud Infrastructure with Oracle's SPARC T-Series servers and Oracle Solaris. Designed, tuned, tested and fully documented, the Oracle Optimized Solution for Enterprise Cloud Infrastructure now offers customers looking to upgrade, consolidate and virtualize their existing SPARC-based infrastructure a proven foundation for private cloud-based services which can lower TCO by up to 81 percent(1). It delivers faster time to service, reduces deployment time from weeks to days, and can increase system utilization to 80 percent. The Oracle Optimized Solution for Enterprise Cloud Infrastructure can also be deployed at up to 50 percent lower cost over five years than comparable alternatives(2). The expanded solution announced today combines Oracle’s latest SPARC T-Series servers; Oracle Solaris 11, the first cloud OS; Oracle VM Server for SPARC; Oracle’s Sun ZFS Storage Appliance; and Oracle Enterprise Manager Ops Center 12c, which manages all Oracle system technologies, streamlining cloud infrastructure management. Thank you to all who stopped by the Oracle booth at the CloudExpo Conference in New York. We were also at Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, discussing how this solution can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure design. Designed, tuned, and tested, Oracle Optimized Solution for Enterprise Cloud Infrastructure is a complete solution for cloud infrastructure or any virtualized environment, built on proven, documented best practices for deployment and optimization. The solution addresses each layer of the infrastructure stack using Oracle's powerful SPARC T-Series as well as x86 servers with storage, network, virtualization, and management configurations to provide a robust, flexible, and balanced foundation for your enterprise applications and databases. For more information visit Oracle Optimized Solution for Enterprise Cloud Infrastructure. Solution Brief: Accelerating Enterprise Cloud Infrastructure Deployments White Paper: Reduce Complexity and Accelerate Enterprise Cloud Infrastructure Deployments Technical White Paper: Enterprise Cloud Infrastructure on SPARC (1) Comparison based on current SPARC server customers consolidating existing installations including Sun Fire E4900, Sun Fire V440 and SPARC Enterprise T5240 servers to latest generation SPARC T4 servers. Actual deployments and configurations will vary. (2) Comparison based on solution with SPARC T4-2 servers with Oracle Solaris and Oracle VM Server for SPARC versus HP ProLiant DL380 G7 with VMware and Red Hat Enterprise Linux and IBM Power 720 Express - Power 730 Express with IBM AIX Enterprise Edition and Power VM.

    Read the article

  • count on LINQ union

    - by brechtvhb
    I have this LINQ statement: List<UserGroup> domains = UserRepository.Instance.UserIsAdminOf(currentUser.User_ID); query = (from doc in _db.Repository<Document>() join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID where domains.Contains(uug.UserGroup) select doc) .Union(from doc in _db.Repository<Document>() join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID where domains.Contains(uug.UserGroup) select doc); Running this statement doesn't cause any problems. But when I want to count the result set, the query suddenly runs quite slowly. totalRecords = query.Count(); The SQL generated for the count is: SELECT COUNT([t5].[DocumentID]) FROM ( SELECT [t4].[DocumentID], [t4].[DocumentFrom], [t4].[DocumentTo] FROM ( SELECT [t0].[DocumentID], [t0].[DocumentFrom], [t0].[DocumentTo] FROM [dbo].[Document] AS [t0] INNER JOIN [dbo].[User_UserGroup] AS [t1] ON [t0].[DocumentFrom] = [t1].[User_ID] WHERE ([t1].[UserGroupID] = 2) OR ([t1].[UserGroupID] = 3) OR ([t1].[UserGroupID] = 6) UNION SELECT [t2].[DocumentID], [t2].[DocumentFrom], [t2].[DocumentTo] FROM [dbo].[Document] AS [t2] INNER JOIN [dbo].[User_UserGroup] AS [t3] ON [t2].[DocumentTo] = [t3].[User_ID] WHERE ([t3].[UserGroupID] = 2) OR ([t3].[UserGroupID] = 3) OR ([t3].[UserGroupID] = 6) ) AS [t4] ) AS [t5] Can anyone help me to improve the speed of the count query? Thanks in advance!
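
    One hedged way to make the count cheaper is to union only the document keys instead of whole Document rows, so the generated SQL compares and deduplicates a single column; a sketch against the same repositories (assuming the entity exposes the DocumentID property seen in the generated SQL):

        var fromIds = from doc in _db.Repository<Document>()
                      join uug in _db.Repository<User_UserGroup>()
                           on doc.DocumentFrom equals uug.User_ID
                      where domains.Contains(uug.UserGroup)
                      select doc.DocumentID;

        var toIds   = from doc in _db.Repository<Document>()
                      join uug in _db.Repository<User_UserGroup>()
                           on doc.DocumentTo equals uug.User_ID
                      where domains.Contains(uug.UserGroup)
                      select doc.DocumentID;

        // Union removes duplicates, so this counts the same documents as the
        // original query, but over IDs only.
        totalRecords = fromIds.Union(toIds).Count();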

    Read the article

  • working with arrays

    - by user295189
    I currently do a query which goes through the records and forms an array. print_r($query) yields the following: Array ( [0] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => Test comment ) [1] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => comment here ) [2] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => checking ) [3] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => testing ) [4] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => working ) ) Somehow I want to take this output and convert it back to PHP code. So, for example, something like $myArray = array( ... ), where print_r($myArray) yields the same thing as print_r($query) does. Thanks

    Read the article

  • Python form POST using urllib2 (also question on saving/using cookies)

    - by morpheous
    I am trying to write a function to post form data and save returned cookie info in a file so that the next time the page is visited, the cookie information is sent to the server (i.e. normal browser behavior). I wrote this relatively easily in C++ using curlib, but have spent almost an entire day trying to write this in Python, using urllib2 - and still no success. This is what I have so far: import urllib, urllib2 import logging # the path and filename to save your cookies in COOKIEFILE = 'cookies.lwp' cj = None ClientCookie = None cookielib = None logger = logging.getLogger(__name__) # Let's see if cookielib is available try: import cookielib except ImportError: logger.debug('importing cookielib failed. Trying ClientCookie') try: import ClientCookie except ImportError: logger.debug('ClientCookie isn\'t available either') urlopen = urllib2.urlopen Request = urllib2.Request else: logger.debug('imported ClientCookie succesfully') urlopen = ClientCookie.urlopen Request = ClientCookie.Request cj = ClientCookie.LWPCookieJar() else: logger.debug('Successfully imported cookielib') urlopen = urllib2.urlopen Request = urllib2.Request # This is a subclass of FileCookieJar # that has useful load and save methods cj = cookielib.LWPCookieJar() login_params = {'name': 'anon', 'password': 'pass' } def login(theurl, login_params): init_cookies(); data = urllib.urlencode(login_params) txheaders = {'User-agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'} try: # create a request object req = Request(theurl, data, txheaders) # and open it to return a handle on the url handle = urlopen(req) except IOError, e: log.debug('Failed to open "%s".' % theurl) if hasattr(e, 'code'): log.debug('Failed with error code - %s.' % e.code) elif hasattr(e, 'reason'): log.debug("The error object has the following 'reason' attribute :"+e.reason) sys.exit() else: if cj is None: log.debug('We don\'t have a cookie library available - sorry.') else: print 'These are the cookies we have received so far :' for index, cookie in enumerate(cj): print index, ' : ', cookie # save the cookies again cj.save(COOKIEFILE) #return the data return handle.read() # FIXME: I need to fix this so that it takes into account any cookie data we may have stored def get_page(*args, **query): if len(args) != 1: raise ValueError( "post_page() takes exactly 1 argument (%d given)" % len(args) ) url = args[0] query = urllib.urlencode(list(query.iteritems())) if not url.endswith('/') and query: url += '/' if query: url += "?" + query resource = urllib.urlopen(url) logger.debug('GET url "%s" => "%s", code %d' % (url, resource.url, resource.code)) return resource.read() When I attempt to log in, I pass the correct username and pwd,. yet the login fails, and no cookie data is saved. My two questions are: can anyone see whats wrong with the login() function, and how may I fix it? how may I modify the get_page() function to make use of any cookie info I have saved ?

    Read the article

  • Login failed when a web service tries to communicate with SharePoint 2007

    - by tata9999
    Hi, I created a very simple webservice in ASP.NET 2.0 to query a list in SharePoint 2007 like this: namespace WebService1 { /// <summary> /// Summary description for Service1 /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. // [System.Web.Script.Services.ScriptService] public class Service1 : System.Web.Services.WebService { [WebMethod] public string HelloWorld() { return "Hello World"; } [WebMethod] public string ShowSPMyList() { string username = this.User.Identity.Name; return GetList(); } private string GetList() { string resutl = ""; SPSite siteCollection = new SPSite("http://localhost:89"); using (SPWeb web = siteCollection.OpenWeb()) { SPList mylist = web.Lists["MySPList"]; SPQuery query = new SPQuery(); query.Query = "<Where><Eq><FieldRef Name=\"AssignedTo\"/><Value Type=\"Text\">Ramprasad</Value></Eq></Where>"; SPListItemCollection items = mylist.GetItems(query); foreach (SPListItem item in items) { resutl = resutl + SPEncode.HtmlEncode(item["Title"].ToString()); } } return resutl; } } } This web service runs well when tested using the built-in server of Visual Studio 2008. The username indicates exactly my domain account (domain\myusername). However when I create a virtual folder to host and launch this web service (still located in the same machine with SP2007), I got the following error when invoking ShowSPMyList() method, at the line to execute OpenWeb(). These are the details of the error: System.Data.SqlClient.SqlException: Cannot open database "WSS_Content_8887ac57951146a290ca134778ddc3f8" requested by the login. The login failed. Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. Does anyone have any idea why this error happens? Why does the web service run fine inside Visual Studio 2008, but not when running stand-alone? I checked and in both cases, the username variable has the same value (domain\myusername). Thank you very much.
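
    The error suggests that, once hosted in its own virtual directory, the web service runs under the application pool account (NT AUTHORITY\NETWORK SERVICE), which has no rights on the SharePoint content database — unlike the Visual Studio built-in server, which runs as the logged-on developer. Two common approaches are changing the application pool identity to an account SharePoint trusts, or wrapping the SharePoint calls in SPSecurity.RunWithElevatedPrivileges as sketched below; elevation only helps if the process account itself has access, so treat this as a hedged sketch rather than a guaranteed fix:

        private string GetList()
        {
            string result = "";
            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                // The SPSite must be created inside the delegate so it is
                // opened under the elevated identity, and disposed afterwards.
                using (SPSite siteCollection = new SPSite("http://localhost:89"))
                using (SPWeb web = siteCollection.OpenWeb())
                {
                    SPList mylist = web.Lists["MySPList"];
                    SPQuery query = new SPQuery();
                    query.Query = "<Where><Eq><FieldRef Name=\"AssignedTo\"/>" +
                                  "<Value Type=\"Text\">Ramprasad</Value></Eq></Where>";
                    foreach (SPListItem item in mylist.GetItems(query))
                    {
                        result += SPEncode.HtmlEncode(item["Title"].ToString());
                    }
                }
            });
            return result;
        }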

    Read the article

  • Two Wifi Icons in Panel [Solved]

    - by Alex
    I have the exact problem in 13.10 as this user Two Wifi indicators in panel. Here are some screenshots: Here are some screenshots from another user: http://ubuntuforums.org/showthread.php?t=2183020&p=12825563 ifconfig and iwconfig outputs $ ifconfig lo Link encap:Local Loopback inet addr:XXXXXX Mask:XXXXXXX inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2243 errors:0 dropped:0 overruns:0 frame:0 TX packets:2243 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:209889 (209.8 KB) TX bytes:209889 (209.8 KB) wlan0 Link encap:Ethernet HWaddr XXXXXXXXX inet addr:XXXXXX Bcast:XXXXXXXX Mask:XXXXXXX inet6 addr: XXXXXXX Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5925 errors:0 dropped:0 overruns:0 frame:0 TX packets:3361 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2951818 (2.9 MB) TX bytes:630579 (630.5 KB) $ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"XXXXX" Mode:Managed Frequency:2.437 GHz Access Point: XXXXXXXX Bit Rate=72.2 Mb/s Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on Link Quality=49/70 Signal level=-61 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:153 Invalid misc:472 Missed beacon:0

    Read the article

  • How to fill dataset when sql is returning more than one table

    - by Shantanu Gupta
    How do I fill multiple tables in a dataset? I am using a query that returns four tables. At the front end I am trying to fill all four resultant tables into a dataset. Here is my query. The query is not complete, but it is just a reference for my question: Select * from tblxyz compute sum(col1) Suppose this query returns more than one table; I want to fill all the tables into my dataset. I am filling the result like this con.open(); adp.fill(dset); con.close(); Now when I check this dataset, it shows me that it has four tables, but only the first table's data is displayed in it. The other three don't even have a schema. What do I need to do to get the desired output?

    Read the article

  • CodePlex Daily Summary for Saturday, November 09, 2013

    CodePlex Daily Summary for Saturday, November 09, 2013Popular ReleasesCoolpy: CoolpyI: Coolpy???,????rom??????window phone????????????。???????????Praxis2: Especificaciones de Casos de Uso Iteracción 1: Especificaciones de Casos de Uso Iteracción 1 Responsables Anderson CU Buscar Obra CU Registrar Obra CU Registrar Alquiler Juan Victor CU Buscar Cliente CU Registrar Cliente CU Registrar EntregaMedia Companion: Media Companion MC3.586b: Tv - Multi-episodes restored to MCThere's been a plenty of bug fixes occuring lately, with IMDB changing their info, and some great feed-back by users. But Thanks to Billyad2000, Multi-episodes, are now displaying correctly in Media Companion, complete with all functionality. This was a hard effort, with more than a few dev's in the past having looked at this code to get it working. But, like a light-bulb going off, Billy's managed to massage the code, and restore this much missed function...Dynamics AX 2012 R2 Kitting: AX 2012 R2 CU7 release of Kitting: Here is the AX 2012 R2 CU7 release of kitting. Released both as a XPO and a model.PantheR's GraphX for .NET: GraphX for .NET RELEASE v1.0.1: PLEASE RATE THIS RELEASE IF YOU LIKED IT! THANKS! :) RELEASE 1.0.1 + Changed ExportToImage() parameters: added useZoomControlSurface param that enables zoom control parent visual space to be used for export instead whole GraphArea panel. Using this technique it is possible to export graphs with negative vertices coordinates. + Added common interface IZoomControl for all included Zoom controls + Added new method GraphArea.GenerateGraph() that accepts only optional parameters and will use in...ConEmu - Windows console with tabs: ConEmu 131107 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe"Team Foundation Server Upgrade Guide: v3 - TFS 2013 Upgrade Guide: Welcome to the Team Foundation Server Upgrade Guide Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has not been through an independent technical review Known issues NoneUpgrading SharePoint section is not included yet. Independent technical review is pending.VidCoder: 1.5.12 Beta: Added an option to preserve Created and Last Modified times when converting files. In Options -> Advanced. Added an option to mark an automatically selected subtitle track as "Default". Updated HandBrake core to SVN 5878. Fixed auto passthrough not applying just after switching to it. 
Fixed bug where preset/profile/tune could disappear when reverting a preset.Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2013.9.25): XrmToolbox improvement Correct changing connection from the status dropdown Tools improvement Updated tool Audit Center (v1.2013.9.10) -> Publish entities Iconator (v1.2013.9.27) -> Optimized asynchronous loading of images and entities MetadataDocumentGenerator (v1.2013.11.6) -> Correct system entities reading with incorrect attribute type Script Manager (v1.2013.9.27) -> Retrieve only custom events SiteMapEditor (v1.2013.11.7) -> Reset of CRM 2013 SiteMap ViewLayoutReplicator (v1.201...Microsoft SQL Server Product Samples: Database: SQL Server 2014 CTP2 In-Memory OLTP Sample, based: This sample showcases the new In-Memory OLTP feature, which is part of SQL Server 2014 CTP2. It shows the new memory-optimized tables and natively-compiled stored procedures, and can be used to show the performance benefit of in-memory OLTP. Installation instructions for the sample are included in the file ‘awinmemsample.doc’, which is part of the download. You can ask a question about this sample at the SQL Server Samples Forum Composite C1 CMS - Open Source on .NET: Composite C1 4.1: Composite C1 4.1 (4.1.5058.34326) Write a review for this release - help us improve, recommend us. Getting started If you are new to Composite C1 and want to install it: http://docs.composite.net/Getting-started What's new in Composite C1 4.1 The following are highlights of major changes since Composite C1 4.0: General user features: Drag-and-drop images and files like PDF and Word directly from own your desktop and folders into page content Allow you to install Composite Form Builder ...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list Added checking for plugin updates from AboutBox Multiple formatting improvements/fixes Implemented selection of the CLR version when preparing distribution package Added project panel button for showing plugin shortcuts list Added 'What's New?' panel Fixed auto-formatting scrolling artifact Implemented navigation to "logical" file (vs. auto-generated) file from output panel To avoid the DLLs getting locked by OS use MSI file for the installation.Home Access Plus+: v9.7: Updated: JSON.net Fixed: Issue with the Windows 8 App Added: Windows 8.1 App Added: Win: Self Signed HAP+ Install Support Added: Win: Delete File Support Added: Timeout for the Logon Tracker Removed: Error Dialogs on the User Card Fixed: Green line showing over the booking form Note: a web.config file update is requiredWPF Extended DataGrid: WPF Extended DataGrid 2.0.0.10 binaries: Now row summaries are updated whenever autofilter value sis modified.xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net Visual Studio Runner: A placeholder for downloading Visual Studio runner VSIX files, in case the Gallery is down (or you want to downgrade to older versions).VeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013) : Set correctly the minimum required version in volumes header (this value must always follow the program version after any major changes). This also solves also the hidden volume issueCaptcha MVC: Captcha MVC 2.5: v 2.5: Added support for MVC 5. The DefaultCaptchaManager is no longer throws an error if the captcha values was entered incorrectly. Minor changes. v 2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider. 
This could lead to a situation when the captcha was not working with the SessionStorageProvider. Minor changes. v 2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as parameter for all methods. Improved font size ...Duplica: duplica 0.2.498: this is first stable releaseDNN Blog: 06.00.01: 06.00.01 ReleaseThis is the first bugfix release of the new v6 blog module. These are the changes: Added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios Changed the name of the module definition to avoid clash with Evoq Social Addition of sitemap providerVG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes NEW: Added Support for "ImageHostHQ.com" links NEW: Added Support for "ImgMoney.net" links NEW: Added Support for "ImgSavy.com" links NEW: Added Support for "PixTreat.com" links Bug fixesNew ProjectsAppBootloader: ???CS????????????? Let your C\S program more flexible for automatic updatesArduino Visual Studio: Purpose of this project is to demonstrate using Visual Studio 2012 with Atmel chip on a Arduino UNO board.ASP.NET Identity: ASP.NET Identity is the new membership system for building ASP.NET web applications. ASP.NET Identity allows you to add login features to your application and mAsset Maintenance Management Console: A.M.M.C is an attempt at creating an extremely versatile interface tool to maintain assets.AX 2012 R2 SYNC: SYNC for AX 2012 introduces a centralized company concept which holds and manages the enterprise-wide master data for synchronizing across multiple companies.BYOND - Build Your OwN Device (audio synths, effects, DSP, sequencer, VST): Byond is an environment for audio and midi programming in C#. It's available as VST plugin or standalone application.CoveSmushbox: A simple .NET library and a Windows CLI to the SMUSH Box.FORMULA 2.0: Formula specifications are highly declarative logic programs that can express rich synthesis and verification problems.Grostbite Engine: Free 3D game engine.hMailServer from RoundCube: A RoundCube plugin for interacting with hMailServer 5.4B1950. The plugin allows to configure vacation configuration of hMailServer from RoundCube.KDG's IP Reporter: Baisc reporter for local and private IP addresses.LADNS Service Watcher: LADNS Service WatcherLampguiden: LampguidenMedia Recommender Service: We are 6 software engineer students developing a media recommendation service as part of our 3rd semester project. photograp: SPA Photo gallery. Work in progress... Power Buddy: The Windows power tray icon only displays two power plans. If this has bothered you since 2009, Power Buddy is for you. Power Buddy displays all of them.Programming Demos: This project contains demonstration code that may be helpful to people learning VisualBasic .NET.Prototype : Traveling Alone Website: A website aiming to become an online community for solo travelers.ShowDBPool: ShowDBPoolThali: Thali is about making it falling off a log easy for users to run their own services on their own devices by building a peer to peer web.TiendaWebCursoAccentureNet: aWpf PdfReader: This is a pdf reader, development based WPF and MuPDF,You can use the keyboard to operate it.This is pdf reader can save the user's open records.

    Read the article

  • Linq Left Outer Join

    - by Neil
    I am new to LINQ and am trying to convert a SQL query to LINQ: SQL: left outer join PRODUCT_BEST_USE pbu on pbu.PRODUCT_GUID = @uProductId and pbu.BEST_USE_GUID = bu.BEST_USE_GUID LINQ: from PBU in PRODUCT_BEST_USE.Where(PBU=>PBU.PRODUCT_GUID == p.PRODUCT_GUID).DefaultIfEmpty() When I add and PBU.BEST_USE_GUID equals BU.BEST_USE_GUID, I get an error: "A query body must end with a select clause or a group clause" Here is the full Linq query: from p in PRODUCT join BU in BEST_USE on p.CATEGORY_GUID equals BU.CATEGORY_GUID from PBU in PRODUCT_BEST_USE.Where(PBU=>PBU.PRODUCT_GUID == p.PRODUCT_GUID).DefaultIfEmpty() and PBU.BEST_USE_GUID equals BU.BEST_USE_GUID where p.PRODUCT_GUID == new Guid("d317752b-581d-4f43-92fa-4a4af91009f5") select new { BU.NAME, PBU.PRODUCT_BEST_USE_GUID }
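
    Query syntax does not accept an extra "and" inside a join ... equals ... clause; the usual pattern for a left outer join on two columns is join ... into with DefaultIfEmpty() and an anonymous-type key. A hedged sketch against the same tables (property names assumed from the query above):

        var result =
            from p in PRODUCT
            join BU in BEST_USE on p.CATEGORY_GUID equals BU.CATEGORY_GUID
            join pbuJoin in PRODUCT_BEST_USE
                 on new { ProductGuid = p.PRODUCT_GUID, BestUseGuid = BU.BEST_USE_GUID }
                 equals new { ProductGuid = pbuJoin.PRODUCT_GUID, BestUseGuid = pbuJoin.BEST_USE_GUID }
                 into pbuGroup
            from PBU in pbuGroup.DefaultIfEmpty()          // left outer join
            where p.PRODUCT_GUID == new Guid("d317752b-581d-4f43-92fa-4a4af91009f5")
            select new
            {
                BU.NAME,
                // PBU is null for unmatched rows, so guard the projection.
                PRODUCT_BEST_USE_GUID = PBU == null ? (Guid?)null : PBU.PRODUCT_BEST_USE_GUID
            };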

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desirable features that make InnoDB Memcached a competitive feature in more scenario. Notablely, they are 1) Support multiple table mapping 2) Added background thread to auto-commit long running transactions 3) Enhancement in binlog performance  Let’s go over each of these features one by one. And in the last section, we will go over a couple of internally performed performance tests. Support multiple table mapping In our earlier release, all InnoDB Memcached operations are mapped to a single InnoDB table. In the real life, user might want to use this InnoDB Memcached features on different tables. Thus being able to support access to different table at run time, and having different mapping for different connections becomes a very desirable feature. And in this GA release, we allow user just be able to do both. We will discuss the key concepts and key steps in using this feature. 1) "mapping name" in the "get" and "set" command In order to allow InnoDB Memcached map to a new table, the user (DBA) would still require to "pre-register" table(s) in InnoDB Memcached “containers” table (there is security consideration for this requirement). If you would like to know about “containers” table, please refer to my earlier blogs in blogs.innodb.com. Once registered, the InnoDB Memcached will then be able to look for such table when they are referred. Each of such registered table will have a unique "registration name" (or mapping_name) corresponding to the “name” field in the “containers” table.. To access these tables, user will include such "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key", prefix "@@" is required for signaling a mapped table change. The key and the "mapping name" are separated by a configurable delimiter, by default, it is ".". So the syntax is: get [@@mapping_name.]key_name set [@@mapping_name.]key_name  or  get @@mapping_name set @@mapping_name Here is an example: Let's set up three tables in the "containers" table: The first is a map to InnoDB table "test/demo_test" table with mapping name "setup_1" INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");  Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and that to table "mydatabase/my_demo" with name "setup_3": INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x"); INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx"); To switch to table "my_database/my_demo", and get the value corresponding to “key_a”, user will do: get @@setup_3.key_a (this will also output the value that corresponding to key "key_a" or simply get @@setup_3 Once this is done, this connection will switch to "my_database/my_demo" table until another table mapping switch is requested. so it can continue issue regular command like: get key_b  set key_c 0 0 7 These DMLs will all be directed to "my_database/my_demo" table. And this also implies that different connections can have different bindings (to different table). 2) Delimiter: For the delimiter "." 
that separates the "mapping name" and key value, we also added a configure option in the "config_options" system table with name of "table_map_delimiter": INSERT INTO config_options VALUES("table_map_delimiter", "."); So if user wants to change to a different delimiter, they can change it in the config_option table. 3) Default mapping: Once we have multiple table mapping, there should be always a "default" map setting. For this, we decided if there exists a mapping name of "default", then this will be chosen as default mapping. Otherwise, the first row of the containers table will chosen as default setting. Please note, user tables can be repeated in the "containers" table (for example, user wants to access different columns of the table in different settings), as long as they are using different mapping/configure names in the first column, which is enforced by a unique index. 4) bind command In addition, we also extend the protocol and added a bind command, its usage is fairly straightforward. To switch to "setup_3" mapping above, you simply issue: bind setup_3 This will switch this connection's InnoDB table to "my_database/my_demo" In summary, with this feature, you now can direct access to difference tables with difference session. And even a single connection, you can query into difference tables. Background thread to auto-commit long running transactions This is a feature related to the “batch” concept we discussed in earlier blogs. This “batch” feature allows us batch the read and write operations, and commit them only after certain calls. The “batch” size is controlled by the configure parameter “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size”. This could significantly boost performance. However, it also comes with some disadvantages, for example, you will not be able to view “uncommitted” operations from SQL end unless you set transaction isolation level to read_uncommitted, and in addition, this will held certain row locks for extend period of time that might reduce the concurrency. To deal with this, we introduce a background thread that “auto-commits” the transaction if they are idle for certain amount of time (default is 5 seconds). The background thread will wake up every second and loop through every “connections” opened by Memcached, and check for idle transactions. And if such transaction is idle longer than certain limit and not being used, it will commit such transactions. This limit is configurable by change “innodb_api_bk_commit_interval”. Its default value is 5 seconds, and minimum is 1 second, and maximum is 1073741824 seconds. With the help of such background thread, you will not need to worry about long running uncommitted transactions when set daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long running transactions, and thus further increase the concurrency. Enhancement in binlog performance As you might all know, binlog operation is not done by InnoDB storage engine, rather it is handled in the MySQL layer. In order to support binlog operation through InnoDB Memcached, we would have to artificially create some MySQL constructs in order to access binlog handler APIs. In previous lab release, for simplicity consideration, we open and destroy these MySQL constructs (such as THD) for each operations. 
This required us to set the “batch” size always to 1 when binlog is on, no matter what “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size” are configured to. This put a big restriction on our capability to scale, and also there are quite a bit overhead in creating destroying such constructs that bogs the performance down. With this release, we made necessary change that would keep MySQL constructs as long as they are valid for a particular connection. So there will not be repeated and redundant open and close (table) calls. And now even with binlog option is enabled (with innodb_api_enable_binlog,), we still can batch the transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, thus scale the write/read performance. Although there are still overheads that makes InnoDB Memcached cannot perform as fast as when binlog is turned off. It is much better off comparing to previous release. And we are continuing optimize the solution is this area to improve the performance as much as possible. Performance Study: Amerandra of our System QA team have conducted some performance studies on queries through our InnoDB Memcached connection and plain SQL end. And it shows some interesting results. The test is conducted on a “Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)” machine with 16 GB Memory, Intel Xeon 2.0 GHz CPU X86_64 2 CPUs- 4 Core Each, 2 RAID DISKS (1027 GB,733.9GB). Results are described in following tables: Table 1: Performance comparison on Set operations Connections 5.6.7-RC-Memcached-plugin ( TPS / Qps) with memcached-threads=8*** 5.6.7-RC* X faster Set (QPS) Set** 8 30,000 5,600 5.36 32 59,000 13,000 4.54 128 68,000 8,000 8.50 512 63,000 6.800 9.23 * mysql-5.6.7-rc-linux2.6-x86_64 ** The “set” operation when implemented in InnoDB Memcached involves a couple of DMLs: it first query the table to see whether the “key” exists, if it does not, the new key/value pair will be inserted. If it does exist, the “value” field of matching row (by key) will be updated. So when used in above query, it is a precompiled store procedure, and query will just execute such procedures. *** added “–daemon_memcached_option=-t8” (default is 4 threads) So we can see with this “set” query, InnoDB Memcached can run 4.5 to 9 time faster than MySQL server. Table 2: Performance comparison on Get operations Connections 5.6.7-RC-Memcached-plugin ( TPS / Qps) with memcached-threads=8 5.6.7-RC* X faster Get (QPS) Get 8 42,000 27,000 1.56 32 101,000 55.000 1.83 128 117,000 52,000 2.25 512 109,000 52,000 2.10 With the “get” query (or the select query), memcached performs 1.5 to 2 times faster than normal SQL. Summary: In summary, we added several much-desired features to InnoDB Memcached in this release, allowing user to operate on different tables with this Memcached interface. We also now provide a background commit thread to commit long running idle transactions, thus allow user to configure large batch write/read without worrying about large number of rows held or not being able to see (uncommit) data. We also greatly enhanced the performance when Binlog is enabled. We will continue making efforts in both performance enhancement and functionality areas to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
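
    For illustration, a hedged C# sketch of what the extended get syntax above looks like on the wire. It speaks the plain memcached text protocol over a TCP socket, so the only assumptions are the host, the port (11211 is the plugin's usual default) and the mapping name registered in the containers table:

        using System;
        using System.Net.Sockets;
        using System.Text;

        class InnoDbMemcachedDemo
        {
            static void Main()
            {
                using (var client = new TcpClient("127.0.0.1", 11211))
                using (var stream = client.GetStream())
                {
                    // "@@setup_3" switches this connection to the my_database/my_demo
                    // mapping, then fetches the row whose key column equals "key_a".
                    byte[] cmd = Encoding.ASCII.GetBytes("get @@setup_3.key_a\r\n");
                    stream.Write(cmd, 0, cmd.Length);

                    // Reply follows the standard text protocol:
                    // VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n
                    byte[] buf = new byte[4096];
                    int n = stream.Read(buf, 0, buf.Length);
                    Console.WriteLine(Encoding.ASCII.GetString(buf, 0, n));
                }
            }
        }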

    Read the article

  • Boot Problem in Asus EEE PC 1015CX

    - by Sâmrat VikrãmAdityá
    I am a newbie to Linux world, although I have previously worked on Ubuntu 11.04 for daily use (Net Access and simple recordings using Audacity). I am not sure, at what level I stand as a newbie. I bought this Asus Eee PC two days back. The model is Asus 1015CX. See the specs here http://www.flipkart.com/asus-1015cx-blk011w-laptop-2nd-gen-atom-dual-core-1gb-320gb-linux/p/itmd8qu4quzu8srr . I created a live USB to install 12.10. The usb booted fine. When I clicked "Try Ubuntu" option, it showed me a black screen with a cursor blinking. I waited for 15 minutes and had to restart using the power button. On clicking the "Install Ubuntu" button, the install process went seamlessly. [I have a Windows7 installed on one of the partitions]. i installed it alongside previous windows installation. The system was then rebooted for the first time. It showed the GRUB menu and I selected the first option Ubuntu. After showing the splash screen for a second, it began showing various messages on a black screen and then it struck on "Stopping Save kernel state message". I had to force shut the system using power button. Sometimes it just gives a blank screen with a cursor blinking and on pressing power button, some messages stating that acpid is doing something and stopping services pops up and the system shuts down. I tried booting with "nomodeset" and other parameters as directed in solution to previous such problems on forums. Also Ctrl+Alt+F1,F2,F3,F4,F5,F6..F12 is not doing anything for me anywhere. At installation, I checked Login automatically option. On booting into recovery several options comes up. Clicking resume just gives me a blank screen with cursor blinking. on dropping to root shell and remounting filesystem as RW, I am able to supply some command that worked for others. startx -- Several messages comes up with last one stating Fatal error: No screen found sudo service lightdm start -- Gives a blank screen with a cursor blinking lspci | grep VGA -- Shows some Intel Integrated Graphic... something I don't remember I had reconfigured xserver-xorg, lightdm, reinstalled ubuntu-desktop, unity. What should I do..?? Will going back to 11.04 work..?? Or I should leave all hopes of running Ubuntu on my netbook. Please help.

    Read the article

  • Optimization of running total calculation in SQL for multiple values per join condition

    - by Kiril
    I have the following table (test_table): date value --------------- d1 10.0 d1 20.0 d2 60.0 d2 10.0 d2 -20.0 d3 40.0 I calculate the running total as follows. I use the same query twice, because first I need to calculate the values for a specific date, and afterwards I can calculate the running total. Otherwise, joining the two tables where date is not unique, I would get too many results from the join: SELECT t1.date, SUM(t2.value) AS total FROM (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t1 JOIN (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t2 ON t1.date >= t2.date GROUP BY t1.date ORDER BY t1.date This gives me (which is fine): date total ------------- d1 30.0 d2 80.0 d3 120.0 BUT, this query isn't very efficient, because I need to change conditions in two places if necessary. In production, test_table is a lot bigger (more than 4 million rows), and the query takes too much time to complete. Question: How can I avoid using the same query twice?

    Read the article

  • make username and userid into array php

    - by Bharanikumar
    I want my array to look something like this: array("userid"=>"username","1"=>"ganeshfriends","2"=>"tester") My MySQL query is something like this: $query = select username, userid from tbluser $result = mysql_query($query); while($row = mysql_fetch_array($result)){ $items = array($row['userid']=>$row['username']); } print_r($items); Can you tell me how to make userid the key and username the value? Thanks

    Read the article
