Search Results

Search found 18245 results on 730 pages for 'recursive query'.

Page 537 of 730

  • How can I find which "command" corresponds to opening a gnome-panel menu, for use in a keyboard shortcut?

    - by Ryan Jendoubi
    There are many questions and answers here and around the web on setting basic keyboard shortcuts in GNOME. Most of them are either for launching applications, for Compiz settings, or for changing defaults for things for which Ubuntu already provides default shortcuts. What I want to know, though, is how to refer to a gnome-panel menu item in a custom keyboard shortcut.

    I'm using Ubuntu 11.10 with GNOME Classic, and the old GNOME 2 / Ubuntu 10.04 keyboard shortcuts for the main menus (Alt-F1) and the "Me Menu" (Super+S) don't seem to work. So my question is two-fold. Primarily I'd like to know how to set those shortcuts. But a second-order question is how I could have found this out myself: is there some program I can use to see what signals or commands are fired off when I click on various things, in this case gnome-panel menu items? I'm interested in the broader question because I've sometimes wanted to set shortcuts for specific menus or menu items in GNOME 2, so a way to find out what command I need there would be useful. Give a man a rod, as they say :-)

    I've had a look at a good many keyboard-shortcut and menu-related questions here, to no avail. One somewhat relevant question is just a "how do I do it" question and applies to Unity, not GNOME, although it would be great if whatever investigatory method answers this question also applied under different desktops, like Unity. The answer to another question is essentially how I was doing it in 10.04 / GNOME 2, although that questioner's query (how to get directly to "Broadcast" with a key combination) isn't exactly addressed. Again, it would be great if an answer delving into how such menus work and how they interact with the rest of the system were applicable to pinpointing menu items.
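
    For reference, under GNOME 2 these panel bindings lived in metacity's gconf keys, so a hedged sketch of restoring them from a terminal looks like the following (this assumes the 11.10 Classic/fallback session still honours the old metacity keys, which is exactly the kind of thing that may have changed):

        # re-bind the panel main menu and the run dialog (GNOME 2 era metacity keys)
        gconftool-2 --type string --set /apps/metacity/global_keybindings/panel_main_menu "<Alt>F1"
        gconftool-2 --type string --set /apps/metacity/global_keybindings/panel_run_dialog "<Alt>F2"

    As for watching what a click actually triggers, dbus-monitor on the session bus is the usual starting point, though in-process gnome-panel menu activations are not guaranteed to show up there.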

    Read the article

  • Configuring Squid as a reverse proxy

    - by Hassan
    I am having trouble configuring squid to work as a reverse proxy. Here is my scenario: squid is installed on a server with IP 10.1.1.139. I have another computer acting as my proxy server, 10.1.85.106, which has access to 10.1.85.106/program. I want 10.1.1.139/program to be redirected to 10.1.85.106. I have added:

        cache_peer 10.1.85.106 parent 80 0 no-query originserver name=server_1
        cache_peer_domain server_1 /program /program/ program

    When I go to 10.1.1.139/program I get "The following error was encountered while trying to retrieve the URL: /program Invalid URL". Since the error is not about access being denied, I don't think it is due to access restrictions. Do I need to add anything else? Thanks for your time
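
    For reference, "Invalid URL" is typically what squid reports when the listening port is not in accelerator mode, so the http_port line is usually the piece that matters. A hedged sketch of one common accelerator setup follows; it assumes squid should answer on port 80 of 10.1.1.139, and it swaps cache_peer_domain (which matches domain names, not URL paths) for a urlpath_regex acl with cache_peer_access. The acl name program_path is illustrative.

        http_port 80 accel defaultsite=10.1.1.139
        cache_peer 10.1.85.106 parent 80 0 no-query originserver name=server_1
        acl program_path urlpath_regex ^/program
        cache_peer_access server_1 allow program_path
        http_access allow program_path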

    Read the article

  • Using VirtualBox/VMware to deploy software to multiple sites?

    - by Sim
    Hi all, I'm currently evaluating the feasibility of using VirtualBox (or VMware) to deploy the following stack to 10 sites:

    - Windows XP
    - MSSQL 2005 Express Edition with Advanced Services
    - JBoss, running one in-house application that mostly queries master data (customers/products) and feeds it to other software

    Why do I want to do this? Because the IT staff at my 10 sites are not capable enough, and the steps taken to set up those in-house projects are complicated. The cons I can foresee:

    - Extra horsepower is needed to run the VirtualBox instance
    - The IT staff won't become much more knowledgeable about how to install the stack
    - Cost (a license for VirtualBox in a commercial environment, as well as extra OS licenses)

    I really seek your input on the pros and cons of this approach, or any links I can read further. Thanks a lot

    Read the article

  • TraceTune: Larger Files and History

    - by Bill Graziano
    I updated TraceTune over the weekend.  I increased the trace file upload size to 20MB.  We’ve processed over half a million rows of trace data so far and I’m confident this won’t kill the server.

    I added average CPU and average disk reads to the screen that lists the SQL statements in a trace file.  I only added these two.  I’m pretty sure average writes isn’t that important.  I’m still thinking about average duration.  I’m trying to balance showing you what you need with a clean, simple interface.  Plus I have a way to see the averages that I describe further down.

    TraceTune now keeps the last 10 files that you’ve uploaded and will give you some basic details about each file.

    I think the last change I made is the most interesting.  For each SQL statement, I show the history of that statement.  You’ll see each trace file where this statement was found.  It will list the averages for CPU, reads, writes and duration.  This will quickly show you if you’re improving the performance of that query.  In my screen shot above you can see that even though the execution counts are very different, the averages are consistent.

    If you want to see what queries are consuming the most resources on your server, give TraceTune a try.

    Read the article

  • OpenLDAP 2.4 on CentOS 6 doesn't listen on port 636

    - by Oliver Henriot
    I have an OpenLDAP 2.4 server on CentOS 6 whose config I copied from the OpenLDAP 2.3 servers I have running on CentOS 5 machines. On OpenLDAP 2.3, specifying TLSCACertificateFile, TLSCertificateFile and TLSCertificateKeyFile with correct values makes the server listen on port 636. This is not the case on the OpenLDAP 2.4 setup. I have configured it with loglevel -1 but have not seen any clue as to what might be wrong, and reading the OpenLDAP 2.4 manual doesn't indicate that any of the other TLS-related parameters are now mandatory. I don't think they are, because if I run the service manually, using

        /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///"

    the server does listen on port 636 and I can query it using ldapsearch -H ldaps://myserver:636. Is there something I am missing to get the server to listen on port 636 without having to launch it manually every time? Is this linked to CentOS 6 or to OpenLDAP 2.4? Thank you. Cheers,
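
    The symptom (slapd listens on 636 when started by hand with explicit URLs, but not via the init script) usually points at the init script's options rather than slapd.conf. A hedged sketch of the CentOS 6 packaging: the init script reads a sysconfig file and only adds the ldaps:/// listener when told to, so something along these lines is typically needed (the file name and variable names depend on the openldap-servers build), followed by service slapd restart.

        # /etc/sysconfig/ldap  (older CentOS 6 builds)
        SLAPD_LDAPS=yes

        # /etc/sysconfig/slapd (later builds use an explicit URL list instead)
        SLAPD_URLS="ldapi:/// ldap:/// ldaps:///"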

    Read the article

  • Is there a way to obtain a Dell server's RAID configuration/level using only winrm/wsman? (ESXi servers)

    - by EGr
    I've seen videos describing how to configure RAID using wsman/winrm commands run against a server's iDRAC, but I can't seem to find anything that will just give me the current configuration and RAID levels. Is this possible? What URI would I use? If it matters, this is being run against M610s.

    Edit: If there is an easier way to obtain this information by running a script against the iDRAC, I'm not opposed to switching methods.

    Edit: The server is running ESXi, so if there is a way to obtain this through the vSphere Client or PowerCLI, I can do that too. Overall, I just need a way to obtain the RAID configuration for multiple servers without having to query the server's OS itself (e.g. by going through the iDRAC instead).
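
    For reference, a hedged sketch of the kind of enumeration that is typically run against an iDRAC's WS-Man interface. The class URI (Dell's DCIM_VirtualDiskView, from the RAID profile), the default root/calvin credentials and the <idrac-ip> placeholder are all assumptions here, and the exact resource URI can differ by iDRAC/Lifecycle Controller version:

        winrm enumerate http://schemas.dell.com/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_VirtualDiskView -u:root -p:calvin -r:https://<idrac-ip>/wsman -SkipCNcheck -SkipCAcheck -encoding:utf-8 -a:basic

    Each returned instance should describe one virtual disk, including a RAID-level attribute, which would avoid touching the ESXi host at all.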

    Read the article

  • My GLSL shader isn't compiling even though it should. What should I investigate?

    - by reapz
    I'm porting an iOS game to Android. One of the shaders I'm using wouldn't compile until I reduced the number of uniform variables. Here are the uniform definitions:

        uniform highp   mat4 ViewProjMatrix;
        uniform mediump vec3 LightDirWorld;
        uniform mediump int  BoneCount;
        uniform highp   mat4 BoneMatrixArray[8];
        uniform highp   mat3 BoneMatrixArrayIT[8];
        uniform mediump int  LightCount;
        uniform mediump vec3 LightPos[4];                 // this used to be 12, but is now 4; the next two lines also
        uniform lowp    vec3 LightColour[4];
        uniform mediump vec3 LightInnerOuterFalloff[4];

    My issue is that the GLSL shader wouldn't compile until I reduced the size of the above arrays from 12 to 4. My understanding is that if those three lines were arrays of 12 then I would be using 56 vertex uniform vectors. I query the system at startup (GL_MAX_VERTEX_UNIFORM_VECTORS) and it says that 128 are available. Why wouldn't it compile with 56? I'm having issues on the Kindle Fire.

    Read the article

  • MySQL: calculating the number of connections needed

    - by Udi I
    I am trying to figure out my needs regarding web service hosting. After trying Azure I have realized that the default MySQL they provide (through a third party) limits the account to 4 connections. You can then upgrade the account to 15, 30 or 40 connections (which is quite expensive). Their 15-connection plan is described as: "Excellent choice for light test and staging apps that need a reliable MySQL database". I have 2 questions:

    1. If my application is a web service which needs to perform ~120k queries a day (normal/bell distribution) and each query is ~150ms (duration) / ~400ms (fetch), how many connections do I need?
    2. If instead of using cloud computing I choose a VPS, how many connections will I be able to handle on a 1GB, 2-core VPS?

    Thank you!
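
    A back-of-the-envelope way to frame the first question (a hedged sketch: it assumes queries arrive independently and that the 150 ms duration and 400 ms fetch are held serially on one connection, with no pooling overhead):

        # rough concurrency estimate for ~120k queries/day at ~0.55 s per query
        queries_per_day = 120_000
        seconds_per_day = 24 * 60 * 60
        avg_arrival_rate = queries_per_day / seconds_per_day   # ~1.39 queries/s
        service_time_s = 0.150 + 0.400                         # seconds a connection is busy per query

        # Little's law: average connections in use = arrival rate * time each one is held
        avg_connections = avg_arrival_rate * service_time_s    # ~0.76
        peak_factor = 4                                        # assumed peak-to-average ratio for a bell-shaped day
        print(f"average busy connections ~ {avg_connections:.2f}")
        print(f"rough peak estimate      ~ {avg_connections * peak_factor:.1f}")

    Under those assumptions the average concurrency is well under one connection, and even a generous peak factor stays inside a 4-connection cap; the practical limit is more likely to be burstiness and pool churn than steady-state load.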

    Read the article

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as:

        {
            "type": "Bike",
            "color": "Red",
            "commentIds": [3193, 3194],
            "attachmentIds": [123, 413]
        }

    My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution; allowing a service access to a "service fetching helper" seems wrong, and that should stay at the application level. I am using Doctrine 2 as an ORM, and I could technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would create tighter coupling, rather than leaving it up to the right service object.
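
    A hedged constructor-injection sketch of what that could look like (Service_Comment and Service_FileUpload are the collaborators named above; findByIds() and createFromRequest() are illustrative method names, not an existing API):

        <?php
        class Service_LineItem_Bike
        {
            private $comments;
            private $uploads;

            // dependencies are handed in, not looked up inside the service
            public function __construct(Service_Comment $comments, Service_FileUpload $uploads)
            {
                $this->comments = $comments;
                $this->uploads  = $uploads;
            }

            public function createFromRequest(array $data)
            {
                $comments    = $this->comments->findByIds($data['commentIds']);
                $attachments = $this->uploads->findByIds($data['attachmentIds']);
                // ... build the Bike entity, associate $comments and $attachments, persist via the ORM
            }
        }

        // composition root / application level: wiring happens here, which keeps any
        // "service fetching helper" out of the services themselves
        $lineItemService = new Service_LineItem_Bike(new Service_Comment(), new Service_FileUpload());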

    Read the article

  • What would be the ideal SAS package to integrate with an enterprise-scale web application developed using Java

    - by Rajesh
    SAS-Java connectivity and integration for an enterprise-class web application: I am trying to narrow down the available approaches for connecting to SAS from Java. I am assuming the following; please correct my assumptions if you think they are not correct:

    - SAS/ACCESS (ODBC, JDBC) / SAS/SHARE*NET: query SAS datasets using a JDBC model
    - SAS/IntrNet: for connectivity with Java, to build small-scale applications
    - SAS Integration Technologies: for connectivity with Java, to build distributed Java applications

    Now the scenario is that I need to build an enterprise-class web application using Java/J2EE and ensure this application talks to SAS to:

    - Query SAS datasets
    - Execute SAS programs and generate reports

    I am looking for a cost-effective and robust solution that will work in a multi-user environment.

    Read the article

  • Mac OS X 10.5/6: authenticate against NIS or LDAP when both servers have your username

    - by Wang
    We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record? I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it. Is there any way to make the OS check the rest of the search path after it finds the username?

    Read the article

  • Mapping SkyDrive as a network drive in Mac OS X

    - by vittore
    As you probably know, if you have a Windows Live account you can use the free 25 GB SkyDrive storage. Moreover, many people know that if you go to your SkyDrive in a browser and copy the cid query parameter value (https://...live.com/...&cid=xxxxxxxx), you can map SkyDrive as a network drive in Windows using the network path \\[cid].docs.live.net\[cid]\. I do know that if I have a network share like \\server\folder I can map it in Mac OS too, as smb://server/folder; however, that doesn't seem to work with SkyDrive. When I try to map it as smb://[cid].docs.live.net/[cid], Finder says it can't connect. Does anyone know how to map it?

    Read the article

  • Replace a set of integers with their respective string values

    - by Tripz
    I have a query which returns output like 5,4,6, where 1 = apple, 2 = mango, 3 = banana, 4 = plum, 5 = cherry, 6 = kiwi, etc. I would like the output to read cherry,plum,kiwi instead of 5,4,6. How can I achieve that in the same SELECT statement? I am okay with hard-coding the values. Maybe I didn't explain it clearly; here is a sample:

        SELECT fruits FROM t_fruitid WHERE id = 7

    This returns '5,6,4' as a single row. Now I want this single-row output returned as 'cherry,kiwi,plum' instead. How do I do this? Thank you
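
    A hedged sketch of one way to do this, assuming MySQL (FIND_IN_SET and GROUP_CONCAT) and an illustrative lookup table fruit_names(id, name) holding the apple/mango/banana/... mapping:

        SELECT GROUP_CONCAT(fn.name ORDER BY FIND_IN_SET(fn.id, f.fruits)) AS fruit_list
        FROM   t_fruitid f
        JOIN   fruit_names fn ON FIND_IN_SET(fn.id, f.fruits) > 0
        WHERE  f.id = 7;

    FIND_IN_SET matches each lookup id against the comma-separated column, and the ORDER BY keeps the names in the same order as the stored ids, so '5,6,4' comes back as 'cherry,kiwi,plum'.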

    Read the article

  • How do you configure ISC BIND to support GSS-TSIG updates?

    - by netlinxman
    First, has anyone EVER configured ISC BIND 9.5.0 or greater with support for GSS-TSIG dynamic DNS updates AND gotten it to work? If so, what configuration was used to make that happen? I feel close to having this working. I see that the GSS credential passes without apparent error during the TKEY negotiation between an Active Directory DC and the BIND DNS server:

        client 192.168.0.30#52314: query
        gss cred: "DNS/[email protected]", GSS_C_ACCEPT, 4294967256
        gss-api source name (accept) is [email protected]
        process_gsstkey(): dns_tsigerror_noerror
        client 192.168.0.30#52314: send

    But when the update is sent, it is refused:

        client 192.168.0.30#58330: update
        client 192.168.0.30#58330: updating zone 'example.com/IN': update failed: rejected by secure update (REFUSED)
        client 192.168.0.30#58330: send

    Does anyone have this working in the real world?
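
    For reference, "rejected by secure update (REFUSED)" after a clean TKEY exchange usually means the zone has no update-policy that grants the client's Kerberos principal. A hedged sketch of the zone stanza (the file path is made up, the permissive grant rule is only for testing, and which nametypes such as krb5-self/ms-self are available depends on the BIND version):

        zone "example.com" {
            type master;
            file "dynamic/example.com.db";
            update-policy {
                // testing only: any identity that passed GSS-TSIG may update the zone;
                // tighten this to krb5-self / ms-self style rules once updates work
                grant * subdomain example.com. ANY;
            };
        };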

    Read the article

  • Internet Time tab has disappeared from the Date and Time applet of the Control Panel

    - by Robert Thornton
    Previously, there was an Internet Time tab on the Date and Time applet of the Control Panel, wherein one could force a query of an internet time server and also type in a different server from the ones supplied. However, this tab has now disappeared, and I need to have it back. I should mention that this machine has never been part of a domain, since it seems that machines that are such do not have such a tab. I should be obliged to anyone who can help me restore the missing tab. Windows 7 Home Premium Service Pack 1
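
    For what it's worth, the settings behind that tab can still be driven from an elevated command prompt while the tab is missing; a hedged example using the stock w32tm tool, with time.windows.com as a placeholder server:

        w32tm /config /manualpeerlist:time.windows.com /syncfromflags:manual /update
        w32tm /resync

    This doesn't restore the tab itself, but it does the same job of pointing the Windows Time service at a chosen NTP server and forcing a sync.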

    Read the article

  • How do I grant permissions to remotely start/stop a service using Powershell?

    - by splattered bits
    We have a PowerShell script that restarts a service on another computer. When we use PowerShell's built-in service control cmdlets, like so:

        $svc = Get-Service -Name MyService -ComputerName myservicehostname
        Stop-Service -InputObject $svc
        Start-Service -InputObject $svc

    we get this error back:

        Stop-Service : Cannot open MyService service on computer 'myservicehostname'.

    However, when we use sc.exe, like so:

        C:\Windows\System32\sc \\myservicehostname stop MyService
        C:\Windows\System32\sc \\myservicehostname start MyService

    the start and stop succeed. The user doing the restarting is not an administrator. We use subinacl to grant the user permissions to start/stop and query the service:

        subinacl.exe /service MyService /GRANT=MyServiceControlUser=STO

    How come PowerShell can't stop my service but sc.exe can?
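
    Until the underlying rights question is settled, a hedged workaround sketch is to keep the script in PowerShell but let sc.exe do the actual stop/start, since that path is already known to work with the subinacl grant. The function name and the sleep interval are illustrative:

        function Restart-RemoteService {
            param([string]$ComputerName, [string]$Name)
            # assumption: sc.exe asks only for the rights the subinacl grant already covers
            & "$env:windir\System32\sc.exe" "\\$ComputerName" stop  $Name | Out-Null
            Start-Sleep -Seconds 5
            & "$env:windir\System32\sc.exe" "\\$ComputerName" start $Name | Out-Null
        }

        Restart-RemoteService -ComputerName myservicehostname -Name MyService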

    Read the article

  • My new favourite traceflag

    - by Dave Ballantyne
    As we are all aware, there are a number of traceflags.  Some documented, some semi-documented and some completely undocumented.  Here is an undocumented one that Paul White (b|t) mentioned almost as an aside in one of his excellent blog posts.

    Much has been written about residual predicates and how a predicate can be pushed into a seek/scan operation.  This is a good thing to happen; it saves a lot of processing from having to be done.  For the uninitiated, though: if we have a simple SELECT statement such as the first query sketched below, the process that SQL Server goes through to resolve it is:

    1. The index IX_Person_LastName_FirstName_MiddleName is navigated to find the first "Smith".
    2. For each "Smith", the middle name is checked for being a null.

    Two operations!  And the execution plan doesn't fully represent all the work that is being undertaken.  There is only a single seek operation; the work undertaken to resolve the condition "MiddleName is not null" has been pushed into it.  This can be seen in the operator's properties: the "Seek Predicate" is how the index has been navigated, and the "Predicate" is the condition run over every row.  A scan inside a seek!

    So the question is: how many rows were resolved by the seek and how many by the scan?  How many rows did the filter remove?  Wouldn't it be nice if this operation could be split?  That's exactly what traceflag 9130 does.  Executing the query with the traceflag (the second sketch below) changes the plan rather dramatically, and should change how we think about the index seek itself.  A Filter operator has been added and, unsurprisingly, the condition in it is "MiddleName is not null".  So it is now evident that the seek operation found 103 Smiths, and 60 of those Smiths had a non-null MiddleName.

    This traceflag has no place on a production system; don't even think about it.
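
    The inline code and plan screenshots did not survive; a hedged reconstruction of the two queries, based on the index name and the row counts quoted in the text (the AdventureWorks Person.Person table is assumed):

        -- plain query: the "MiddleName IS NOT NULL" test becomes a residual predicate inside the seek
        SELECT FirstName, MiddleName, LastName
        FROM   Person.Person
        WHERE  LastName = 'Smith'
          AND  MiddleName IS NOT NULL;

        -- same query with the undocumented traceflag: the residual predicate surfaces as a separate Filter operator
        SELECT FirstName, MiddleName, LastName
        FROM   Person.Person
        WHERE  LastName = 'Smith'
          AND  MiddleName IS NOT NULL
        OPTION (QUERYTRACEON 9130);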

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    Looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. My code is written so that it deals with the parameters (sorting / filtering the queryset), then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation that has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop which then get altered during the loop:

        variable_1 = 0
        variable_2 = 0
        for obj in queryset:
            if obj.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... more conditions that alter the variables
        return queryset, context

    So according to the theory I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find this makes the code less readable, when you need to constantly jump from one method to the next, figuring out all the parts of it while keeping the outermost method in your head. I find that with a long method that is well formatted you can see the logic more easily, as it isn't hidden away in inner methods. I could factor the code out into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or in methods that don't do one thing but two or three (alternatively I could repeat the inner loops for each task, but then there would be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?

    Read the article

  • Windows diagnostic tools

    - by Tathagata
    I just joined a small computer lab which basically uses all Windows boxes and a few Windows Server 2003 machines. Coming from a *nix background, I am missing all the nice tools we have for diagnosis and troubleshooting. My question is: what are the safe, free (not terribly concerned whether that's free as in beer or as in freedom; both are acceptable) tools out there that Windows system admins use for their day-to-day troubleshooting (network, applications, servers, etc.)? I am aware of the tools that come with Windows (and their limited capabilities), so my query is: what do you fill your arsenal with? (Maybe a list of Windows counterparts to the tools that come in standard Linux distros.)

    Read the article

  • Thick client migration to a web-based application

    - by user1151597
    This query is related to application design and the technology I should consider during a migration. The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200 ms rate) sent from the device to the application. The request to start the cyclic data is sent only once at the beginning, and then the application keeps receiving data from the device until it sends a stop request. Now this same application needs to be deployed over the web on an intranet. The application is composed of a business logic layer and a communication layer which talks to the device through UDP ports. I am trying to find a solution which will allow me to have a single instance of the application on the server, so that the device thinks it is connected as usual, and then manage the clients from the business logic layer. I want to reuse the code of the business layer and the communication layer as much as possible. Please let me know whether web services, WCF or something else is what I should consider for designing the migration. Thanks in advance.

    Read the article

  • I can search users and sites, but search returns nothing on the blog site where I post howtos

    - by Don
    I have always been able to search my blog, where I post howtos for our company. However, now I can only search My Sites or sites. When I click on the blog site that I created for company howtos and try to use Search > This Site: "name of your site", then click Search, I get the following:

        No results matching your search were found.
        Check your spelling. Are the words in your query spelled correctly?
        Try using synonyms. Maybe what you're looking for uses slightly different words.
        Make your search more general. Try more general terms in place of specific ones.
        Try your search in a different scope. Different scopes can have different results.

    I have tried the following:

    1) net stop osearch
    2) net start osearch
    3) iisreset /noforce

    Please help! Don

    Read the article

  • Apache rewrite module, 404 not found

    - by Eneroth3
    I've been having some problems with rewriting directory-style addresses into query strings for my PHP scripts. Here's the rule:

        RewriteRule ^(\w+)/?(\w+)?/?(\w+)?/?$ /index.php?section=$1&category=$2&subcategory=$3 [QSA]

    This line works perfectly fine on both my local WAMP and LAMP servers, and on my friend's LAMP server. However, on the web host I've been using (freehostia) I only get a 404 error when trying to browse a "directory" that isn't really there (it's supposed to be generated by PHP). I've tried contacting their support, but they only say third-party applications aren't their job. I know the rewrite engine is turned on, because some basic redirect attempts have worked. Perhaps this line of code could be better written? It's quite important that extra query parameters are appended, and it would be nice (but not necessary) if the last slash could be left out. Any help is appreciated :)
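
    A hedged sketch of a more defensive version of the same rule, assuming it lives in a .htaccess file at the site root: it skips requests for real files and directories (a frequent source of host-specific 404 behaviour), requires a slash between segments, keeps the optional trailing slash, and still appends extra query parameters via QSA.

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(\w+)(?:/(\w+))?(?:/(\w+))?/?$ index.php?section=$1&category=$2&subcategory=$3 [QSA,L]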

    Read the article

  • Prevalence of WMI enabled in real Windows Server networks

    - by TripleAntigen
    Hi, I would like to get opinions from systems administrators on how common it is for WMI functionality to actually be enabled in corporate networks. I am writing an enterprise network application that could benefit from the features of WMI, but I noted after creating a virtual network based on Server 2008 R2 that WMI seems to be disabled by default. Do systems admins in real corporate networks enable WMI? Or is it usually disabled for security purposes? What is it used for if it is enabled? Thanks for any advice!

    MORE INFO: I should have said that I really need to be able to query the workstations, but I understand that by default the WMI ports on Windows 7 and XP firewalls (at least) are blocked. So do you use some sort of group policy or other method to leave a hole open for WMI on the workstations? Or is it just the servers that are of interest? Thanks for the responses!
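
    For reference, when WMI access is opened up it is usually done with the built-in firewall rule group rather than by opening raw DCOM ports; a hedged sketch of the commands typically pushed out by script or GPO startup script (Windows 7/2008-style syntax first, the legacy XP form second):

        netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
        netsh firewall set service RemoteAdmin enable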

    Read the article

  • Get-QADComputer -LdapFilter & NOT operator

    - by dboftlp
    I'm having issues excluding an OU from my LDAP filter:

        $DaysAgo = (Get-Date).AddDays(-31)
        $ft = $DaysAgo.ToFileTime()
        Get-QADComputer -SizeLimit 0 -IncludeAllProperties -SearchRoot 'DC=My,DC=Domain,DC=Local' -LdapFilter "(&(objectcategory=computer)(lastLogonTimeStamp<=$ft)(!(ou:dn:=DisabledPCs))(|(operatingsystem=Windows 2000 Professional)(operatingSystem=Windows XP*)(operatingSystem=Windows 7*)(operatingSystem=Windows Vista*)(operatingsystem=Windows 2000 Server)(operatingsystem=Windows Server*)))"

    I'm looking to query for all Windows OS systems that haven't logged in to AD for more than 31 days and that are not already in the OU "DisabledPCs", which is where I'll be moving them to. When I run it now, I'm getting all the systems I'm looking for, including those in the "DisabledPCs" OU... I've tried several variations, including:

        (&(!(ou:dn:=DisabledPCs)))

    as well as putting it in different locations in the filter (not that I thought it would make a difference, but I obviously don't know that...). Thanks in advance for any help, -dboftlp
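
    A hedged sketch of a common workaround: Active Directory does not support matching on pieces of the DN inside an LDAP filter (which is why the ou:dn:= clause changes nothing), so the OU exclusion is usually done client-side on the DN after the query returns. The parameters below are the ones already used above, the operating-system clauses can stay in the -LdapFilter unchanged, and the -notlike pattern is illustrative:

        Get-QADComputer -SizeLimit 0 -SearchRoot 'DC=My,DC=Domain,DC=Local' `
            -LdapFilter "(&(objectcategory=computer)(lastLogonTimeStamp<=$ft))" |
            Where-Object { $_.DN -notlike '*OU=DisabledPCs,*' }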

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background The "all-PK-must-be-surrogates" approach is not present in Codd's Relational Model or any SQL Standard (ANSI, ISO or other). Canonical books seems to elude this restrictions too. Oracle's own data dictionary scheme uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommend the same canonical books do: Use surrogate keys as primary keys when: There are no natural or business keys Natural or business keys are bad ( change often ) The value of natural or business key is not known at the time of inserting record Multicolumn natural keys ( usually several FK ) exceed three columns, which makes joins too verbose. Also I have not found canonical source that says natural keys need to be immutable. All I find is that they need to be very estable, i.e need to be changed only in very rare ocassions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seems to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much prevision is made for something that may or may not happen in the future ( the natural PK changed so we will have to use the RDBMS cascade update funtionality ) at the expense of day-to-day task like having to join more tables in every query and having to write code for importing data between databases, an otherwise very strightfoward procedure (due to the need to avoid PK colisions and having to create stage/equivalence tables beforehand ). Other argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PK. But indexes based on short, fix-length varchar are almost as fast as integers. The questions - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach ? - Has Codd's relational model been superceded by a newer relational model ?

    Read the article
