Search Results

Search found 22463 results on 899 pages for 'sub query'.


  • LDAP not showing secondary groups

    - by Sandy Dolphinaura
    Currently, I have an LDAP server (running ClearOS, if that makes any difference) containing a database of users. I set up LDAP on a couple of my Debian VMs using libpam-ldapd and discovered this odd problem: the group/user mapping shows up when running getent group, but the secondary groups do not show up when running id. Here is my /etc/nslcd.conf:

        # /etc/nslcd.conf
        # nslcd configuration file. See nslcd.conf(5) for details.

        # The user and group nslcd should run as.
        uid nslcd
        gid nslcd

        # The location at which the LDAP server(s) should be reachable.
        uri ldaps://10.3.0.1

        # The search base that will be used for all queries.
        base dc=pnet,dc=sandyd,dc=me

        # The LDAP protocol version to use.
        #ldap_version 3

        # The DN to bind with for normal lookups.
        binddn cn=manager,ou=internal,dc=pnet,dc=sandyd,dc=me
        bindpw Me29Dakyoz8Wn2zI

        # The DN used for password modifications by root.
        #rootpwmoddn cn=admin,dc=example,dc=com

        # SSL options
        ssl on
        tls_reqcert never

        # The search scope.
        #scope sub

        #filter group (&(objectClass=group)(gidNumber=*))
        map group uniqueMember member
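    A first diagnostic step (a sketch, not from the original question; the group and user names are placeholders) is to watch what nslcd actually returns for a group lookup, and whether a stale cache is masking the result:

        # Run nslcd in the foreground with debug output and watch the group queries
        sudo service nslcd stop
        sudo nslcd -d

        # In another shell, exercise both lookup paths
        getent group somegroup
        id someuser

        # If nscd is running, flush its cached group data before re-testing
        sudo nscd -i group

    If getent shows the members but id does not, the secondary-group (initgroups) lookup driven by the filter/map lines above is the part to focus on.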

    Read the article

  • Windows Vista Nested Desktop Folders Problem

    - by Samuel Walker
    I have no idea how or when this happened, and it's started to really annoy me. When navigating through Explorer by clicking on icons, I have C:\Users\Samuel\Desktop (the blue special Desktop icon), which contains the items I see on my Desktop. I then have the following folder: C:\Users\Samuel\Desktop (the standard yellow folder icon), which contains many program shortcuts and is completely separate from the other C:\Users\Samuel\Desktop.

    Then, inside the yellow-icon Desktop, I have a sub-folder named Desktop with the blue icon that is a direct mirror of the blue C:\Users\Samuel\Desktop folder (a new folder or file shows up in both). In Explorer, when I directly type C:\Users\Samuel\Desktop I am taken to the yellow folder version. If I go to C:\Users\Samuel\Desktop\Desktop I am taken to the blue folder version. Finally, from cmd, cd'ing to C:\Users\Samuel\Desktop takes me to the yellow folder version whilst C:\Users\Samuel\Desktop\Desktop takes me to the blue folder version.

    How on earth can I get rid of the yellow folder version, leaving the blue C:\Users\Samuel\Desktop? I can't delete either, as it says they're in use.

    UPDATE: Ok, so it looks like doing a dir from cmd lists only one Desktop folder - the yellow one. In addition, it looks like I can't delete either of them (given that they both contain my 'Desktop'
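    One way to see more than Explorer shows (a diagnostic sketch, not part of the original question) is to list the profile directory from cmd with hidden entries, reparse points, and short names visible; if one of the two Desktops is actually a junction or symlink, or has a look-alike short name, these will reveal it:

        cd /d C:\Users\Samuel
        dir /a        &rem everything, including hidden/system entries
        dir /aL       &rem only reparse points (junctions and symlinks)
        dir /x        &rem 8.3 short names, which can expose duplicate-looking entries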

    Read the article

  • Authenticating Active Directory Users to Mac OS X Mavericks Server L2TP VPN Service

    - by dean
    We have a Windows Server 2012 Active Directory infrastructure that consists of two domain controllers. Bound to the Active Directory domain is a Mac OS X Mavericks Server 10.9.3. The server runs Profile Manager and VPN services. My Active Directory users are able to authenticate to Profile Manager, but not the VPN. I have found several threads on other forums from users reporting similar issues; here is just one of many references: https://discussions.apple.com/thread/5174619

    It appears as though the issue is related to a CHAP authentication failure. Can anyone suggest what troubleshooting steps I might take next? Is there a way to liberalize the authentication mechanism to include MSCHAP? Here is an excerpt of the transaction from the logs. Please note the domain has been changed to example.com.

        Jun 6 15:25:03 profile-manager.example.com vpnd[10317]: Incoming call... Address given to client = 192.168.55.217
        Jun 6 15:25:03 profile-manager.example.com pppd[10677]: publish_entry SCDSet() failed: Success!
        Jun 6 15:25:03 --- last message repeated 2 times ---
        Jun 6 15:25:03 profile-manager.example.com pppd[10677]: pppd 2.4.2 (Apple version 727.90.1) started by root, uid 0
        Jun 6 15:25:03 profile-manager.example.com pppd[10677]: L2TP incoming call in progress from '108.46.112.181'...
        Jun 6 15:25:03 profile-manager.example.com racoon[257]: pfkey DELETE received: ESP 192.168.55.12[4500]->108.46.112.181[4500] spi=25137226(0x17f904a)
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP connection established.
        Jun 6 15:25:04 profile-manager kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 0)
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connect: ppp0 <--> socket[34:18]
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: CHAP peer authentication failed for alex
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connection terminated.
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnecting...
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnected
        Jun 6 15:25:04 profile-manager.example.com vpnd[10317]: --> Client with address = 192.168.55.217 has hung up
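    The log shows pppd rejecting the account at the CHAP step, which fits AD accounts: plain MD5-CHAP requires reversibly encrypted passwords (off by default in AD), whereas MS-CHAPv2 works against the normal NT hash. One place to look (a sketch; the exact key path is an assumption drawn from serveradmin output elsewhere, so verify against your own dump) is the protocol the L2TP service is configured to authenticate with:

        # Dump the VPN service's PPP authentication settings
        sudo serveradmin settings vpn | grep -i auth

        # Assumption: if AuthenticatorProtocol shows CHAP, forcing MSCHAP2 may help
        sudo serveradmin settings vpn:Servers:com.apple.ppp.l2tp:PPP:AuthenticatorProtocol:_array_index:0 = "MSCHAP2"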

    Read the article

  • How to remove all CouchDB versions in Ubuntu 10.04 (server)? (after multiple installs)

    - by DjangoRocks
    Hi all, I have done multiple installs of CouchDB using:

        sudo aptitude install couchdb
        sudo apt-get install couchdb

    and, more recently, based on the instructions found at http://wiki.apache.org/couchdb/Installing_on_Ubuntu. May I know how to uninstall or remove all of the above installations? Best regards.

    UPDATE: I've tried running the following commands:

        apt-get remove couchdb
        apt-get purge couchdb

    but received the following errors:

        (Reading database ... 39814 files and directories currently installed.)
        Removing couchdb ...
        invoke-rc.d: initscript couchdb, action "stop" failed.
        dpkg: error processing couchdb (--remove):
         subprocess installed pre-removal script returned error exit status 1
        invoke-rc.d: initscript couchdb, action "start" failed.
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         couchdb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    May I know how to fix this? On issuing the command:

        dpkg -l | grep couchdb

    I received the following response:

        rF  couchdb      0.10.0-1ubuntu2   RESTful document oriented database, system D
        iF  couchdb-bin  0.10.0-1ubuntu2   RESTful document oriented database, programs

    How do I uninstall CouchDB? I think there's some file corruption?
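    The dpkg output shows the removal failing because couchdb's pre-removal script cannot stop the service. A common way to get unstuck (a sketch, at your own risk, since it bypasses the package's own cleanup) is to replace the failing maintainer script with a no-op and purge again:

        # Make the pre-removal script a no-op so dpkg can proceed
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/couchdb.prerm

        # Purge both halves of the install (dpkg -l showed couchdb and couchdb-bin)
        sudo apt-get purge couchdb couchdb-bin

    The same trick applies to couchdb-bin's maintainer scripts if they fail the same way.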

    Read the article

  • Permissions won't cascade more than 1 level

    - by Jovin_
    Running Windows Small Business Server 2011, I have a file structure with a lot of sub-folders (sometimes 5-6 levels deep). I have created access groups to grant access to my users, and also deny groups to deny access to others: X Access and X Deny. These allow or deny access to a mapped network drive X:.

    On the server I put in the groups with Full Control Allow for X Access and Full Control Deny for X Deny. I also tick the box "Apply these permissions to objects and/or containers within this container only" and have ensured that "Apply to:" is "This folder, subfolders and files". But for some reason the permissions only apply to the next level of folders and files. Example structure:

        X:
          Folder 1
            Folder 1a
          Folder 2
            Folder 2a

    If I apply the permissions to X:, they only reach Folder 1 and Folder 2, not 1a and 2a; I then need to manually apply the permissions to those too. Is this working as intended, or am I doing something wrong?
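    Note that the quoted checkbox, "Apply these permissions to objects and/or containers within this container only", does exactly what the symptom describes: it sets the no-propagate flag, so the ACE is handed to immediate children and stops there. Un-ticking it is the first thing to try. To inspect and re-apply inheritance from the command line (a sketch; group name as in the question):

        rem Show the effective ACEs at each depth
        icacls "X:\Folder 1"
        icacls "X:\Folder 1\Folder 1a"

        rem Re-apply a fully inheriting allow ACE: (OI) object inherit, (CI) container inherit
        icacls X:\ /grant "X Access":(OI)(CI)F /T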

    Read the article

  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old on my C: drive and a dual-boot option on boot up. I gave up, booted into the old version, and tried to rename Windows.old to Windows; I was asked if I wanted to merge the two folders. I answered yes, and all seemed OK until I booted up this morning and was given the choice of two versions of Vista. The first one is the one that failed to install correctly and the second one is the old version. How can I get rid of the failed installation? I got rid of the bad boot entry via MSCONFIG.

    Here is my current situation:

    - several hard drives installed
    - using C: as my boot drive
    - a much larger drive (H:) for storing most of my files

    I found a subfolder in my C:\windows folder named windows. Upon inspection I determined it to be older than the C:\windows folder, and therefore it must be the older, working version of the boot. I renamed the C:\windows folder to C:\windows.bad and moved the sub windows folder to the C: root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy. How can I change it back to the C:\ copy, and can I delete the C:\windows.bad file set?
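    On Vista the boot menu lives in the BCD store, so the leftover entry can be inspected and removed from an elevated command prompt (a sketch; identify the stale GUID from the enum output before deleting anything):

        rem List all boot entries with their identifiers, device, and osdevice paths
        bcdedit /enum

        rem Remove the stale entry (replace {GUID} with the identifier shown above)
        bcdedit /delete {GUID}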

    Read the article

  • I get a Segmentation fault when doing apt-get install util-linux

    - by Adam
    I've found that a lot of upgrade commands, and Apache, on my system are failing with segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
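    Segmentation faults spread across unrelated binaries (dpkg maintainer scripts, Apache) usually point below the package system, at corrupted files or failing RAM, rather than at util-linux itself. A few hedged checks before fighting dpkg further:

        # Rule out a corrupted download of the package
        sudo apt-get clean
        sudo apt-get install --reinstall util-linux

        # Verify installed files against their recorded md5sums
        sudo apt-get install debsums
        sudo debsums -s

    If debsums reports widespread mismatches, or the faults persist, a memtest86+ pass from the boot menu is the next thing to rule out.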

    Read the article

  • Active Directory LDAP and user issues (using apache2 for svn access)

    - by CaCl
    I currently have a setup at work that lets users use their Active Directory domain logins and passwords to authenticate and authorize access to Subversion. Now I need to allow application accounts the same access, so our IT group creates application accounts in the Active Directory for us to use. But they want to be "secure", so they set "Workstations Allowed" to a limited number of workstations. When an application account hits the apache2 server for authentication, it can't log in for some reason, and I'm having a heck of a time trying to debug. The error logs only show me:

        [Tue Apr 06 11:24:25 2010] [warn] [client 24.24.24.24] [3469] auth_ldap authenticate:
        user appuser13 authentication failed; URI /svn
        [ldap_simple_bind_s() to check user credentials failed][Invalid credentials]
        [Tue Apr 06 11:24:25 2010] [error] [client 24.24.24.24] user appuser13:
        authentication failure for "/svn": Password Mismatch

    I've checked the password numerous times and it appears to be correct, but I can't seem to get the user to authenticate properly. Below is a snippet of the apache configuration for LDAP:

        # Auth providers

        # Active Directory
        <AuthnProviderAlias ldap ldap1>
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dmain.company.com:389/dc=dmain,dc=company,dc=com?sAMAccountName?sub?(objectClass=*)"
            AuthLDAPBindDN "CN=svnuser13,OU=Application Accounts,dc=dmain,dc=teradata,dc=com"
            AuthLDAPBindPassword secret3
        </AuthnProviderAlias>

        # Another set of users from a different group
        <AuthnProviderAlias ldap ldap2>
            AuthBasicProvider ldap
            AuthLDAPURL ldap://diffldapserver:389/dc=specialusers,dc=com?uid
        </AuthnProviderAlias>

        # Another set of users from a different group
        <AuthnProviderAlias file file1>
            AuthUserFile /var/svn/auth/htpasswd
        </AuthnProviderAlias>

        <Location /svn>
            DAV svn
            SVNPath /var/svn
            Satisfy Any
            Require valid-user
            AuthType Basic
            AuthName "SVN Repository"
            AuthBasicProvider ldap1 file1 ldap2
            AuthzSVNAccessFile /var/svn/auth/access
            AuthzLDAPAuthoritative on
            Require valid-user
        </Location>

    Any help, like tips for debugging, is appreciated!
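    Since the accounts are locked down with "Workstations Allowed" (the account's Log On To list), a useful experiment (a sketch; the DN is illustrative, modeled on the config above) is to reproduce from the apache2 host the exact simple bind mod_authnz_ldap performs:

        # Bind as the application account the same way Apache would
        ldapsearch -x -H ldap://dmain.company.com:389 \
          -D "CN=appuser13,OU=Application Accounts,DC=dmain,DC=company,DC=com" \
          -W -b "dc=dmain,dc=company,dc=com" "(sAMAccountName=appuser13)"

    If this also returns "Invalid credentials" (LDAP error 49), look for the AD diagnostic sub-code in the server's response: data 52e means a genuinely bad password, while data 531 means "not permitted to logon at this workstation", which would confirm the workstation restriction, not the password, is rejecting the bind.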

    Read the article

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no RSYNC, etc... (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared-host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously this is because if you have a symlink that uses an absolute path like this:

        /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), then the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work either. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they would break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
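    For what it's worth, a relative symlink stores only the literal target string and is resolved against the directory containing the link each time it is dereferenced; nothing about the absolute mount point is baked in. A quick demonstration (a sketch):

        $ cd files/sites/www.mysite.com
        $ ln -s ../www.myothersite.com/anotherdir mylink
        $ readlink mylink
        ../www.myothersite.com/anotherdir

    So a plain move of the whole tree preserves relative links; they would only break if the provider's migration recreates files without preserving links (e.g. a naive file-by-file copy), which may be exactly what the tech is refusing to guarantee against.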

    Read the article

  • How to fix subfolder URL behavior in IIS7?

    - by Amr ElGarhy
    I have a problem with my shared hosting: for all websites in subfolders, their URLs appear like this: http://amrelgarhy.com/amrelgarhy/. I wrote to GoDaddy, and they said it's because of IIS7 and they can't solve it. Can anyone tell me how to fix that? Here is what I sent to GoDaddy and their reply:

    "As I saw before on this page http://www.godaddy.com/gdshop/hosting/shared.asp?ci=9009 comparing Windows plans, "Multiple Web sites: unlimited", so I have the right to run more than one website inside my hosting. But what I am facing now is that I can't make more than one website a primary website. I have igurr.com as a primary website, and I want to make the others primary because of this: all home pages for the other websites, which physically live in sub folders, come out like "http://amrelgarhy.com/amrelgarhy/", the URL plus the folder name, and that is what I don't want."

    GODADDY: "Thank you for contacting Hosting Support. The behavior you are describing is standard for IIS 7.0 accounts. All alias domains in this environment will append the folder name they're located in. I.E. an alias domain www.coolexample.com pointed to the '/example' directory will display in a browser as "www.coolexample.com/example". This is due to the way IIS 7.0 handles virtual directories. Unfortunately we do not have any direct work around for this. We apologize for any inconvenience this may cause."

    Read the article

  • HP Procurve 2610 intervlan routing

    - by user19039
    Can anyone tell me why inter-VLAN routing is working for all VLANs except my newly created VLAN 4? I have an HP ProCurve 2610; any help would be appreciated. I have basically this one switch, with all unmanaged switches attached to the core. We have a second 2610 on port 28.

    Running configuration:

        ; J9085A Configuration Editor; Created on release #R.11.25
        hostname "Core_HP"
        interface 22
           speed-duplex 100-full
        exit
        ip routing
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 1-12,17-22,26-27
           ip address 192.168.4.6 255.255.255.0
           tagged 25
           no untagged 13-16,23-24,28
        exit
        vlan 2
           name "WAN"
           untagged 28
           ip address 10.254.254.3 255.255.255.0
        exit
        vlan 3
           name "Wireless"
           untagged 13-16,24
           ip address 192.168.7.6 255.255.255.0
           ip helper-address 192.168.4.2
           tagged 27
        exit
        vlan 35
           name "guest"
           untagged 23
           tagged 24
        exit
        vlan 4
           name "esxi"
           untagged 25
           ip address 10.10.1.1 255.255.248.0
        exit
        ip route 192.168.5.0 255.255.255.0 10.254.254.1
        ip route 192.168.6.0 255.255.255.0 10.254.254.1
        ip route 0.0.0.0 0.0.0.0 192.168.4.10

    show ip route:

        IP Route Entries
        Destination        Gateway         VLAN Type      Sub-Type   Metric Dist.
        ------------------ --------------- ---- --------- ---------- ------ -----
        0.0.0.0/0          192.168.4.10    1    static               1      1
        10.10.0.0/21       esxi            4    connected            0      0
        10.254.254.0/24    WAN             2    connected            0      0
        127.0.0.0/8        reject               static               0      250
        127.0.0.1/32       lo0                  connected            0      0
        192.168.4.0/24     DEFAULT_VLAN    1    connected            0      0
        192.168.5.0/24     10.254.254.1    2    static               1      1
        192.168.6.0/24     10.254.254.1    2    static               1      1
        192.168.7.0/24     Wireless        3    connected            0      0

    show ip:

        Internet (IP) Service
        IP Routing : Enabled
        Default TTL : 64
        Arp Age : 20

        VLAN         | IP Config  IP Address      Subnet Mask     Proxy ARP
        ------------ + ---------- --------------- --------------- ---------
        DEFAULT_VLAN | Manual     192.168.4.6     255.255.255.0   No
        WAN          | Manual     10.254.254.3    255.255.255.0   No
        Wireless     | Manual     192.168.7.6     255.255.255.0   No
        esxi         | Manual     10.10.1.1       255.255.248.0   No
        guest        | Disabled
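    Two things worth verifying before changing config (a sketch; confirm command availability on your firmware release): whether port 25 really is an untagged member of VLAN 4 as intended, and whether the hosts behind it use 10.10.1.1 with the matching /21 mask - note that 255.255.248.0 spans 10.10.0.0 through 10.10.7.255, so a host configured with a /24 mask or a different gateway would fail inter-VLAN routing exactly this way:

        show vlans 4        ! lists tagged/untagged member ports of VLAN 4
        show ip             ! confirms the VLAN 4 interface is up with 10.10.1.1/21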

    Read the article

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over https, and I'm having a couple of problems understanding how to accomplish this. To access our services, you must pass through our ISA firewall to a Win2000 server running IIS6. About half our services are located here; the other half take you to a Win2003 server, also running IIS6.

    So, in order to achieve this, must each server have the proper certificate installed: ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made on our ISA firewall?

    The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com, but the domain name on IIS6_2 is services.domainname.com. I believe this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123, as it's a good name and fast to get. Will I need to purchase two certificates (one for www.domainname.com, installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase three, the extra one being used on our firewall server?

    Another side question is about SANs (subject alternative names). Is this basically adding sub-domains to your cert? So could I purchase one cert with one SAN covering both my www and services hosts? Thanks a lot for your help! Please let me know if I can provide any further information.
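    On the SAN question: a subject alternative name is simply an extra host name that the single certificate is valid for, so one certificate carrying both www.domainname.com and services.domainname.com would cover both servers (and the same certificate, exported with its private key, can be installed on the ISA server for bridging). You can check the SANs on any issued certificate with openssl (a sketch):

        openssl s_client -connect www.domainname.com:443 </dev/null 2>/dev/null \
          | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"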

    Read the article

  • Tomcat web application intermittent freeze

    - by tinny
    I have a Grails web application (just a standard war file) deployed on an Ubuntu 10.10 server running Tomcat 6. My database is PostgreSQL. The problem is that every so often (once or twice a day, after inactivity), when I try to log into this web application, it just freezes. I can navigate to the login page, but when I try to log in (the first time the DB is hit - might be a clue..?) the application freezes indefinitely, with no 500 response code... the browser just waits and waits. I followed the instructions detailed here because the problem described sounded the same as mine. My GC logging showed no long-running GC, all sub-second. When the application freezes, the jmap heap output is:

        using parallel threads in the new generation.
        using thread-local object allocation.
        Concurrent Mark-Sweep GC

        Heap Configuration:
           MinHeapFreeRatio = 40
           MaxHeapFreeRatio = 70
           MaxHeapSize      = 536870912 (512.0MB)
           NewSize          = 21757952 (20.75MB)
           MaxNewSize       = 87228416 (83.1875MB)
           OldSize          = 65404928 (62.375MB)
           NewRatio         = 7
           SurvivorRatio    = 8
           PermSize         = 21757952 (20.75MB)
           MaxPermSize      = 85983232 (82.0MB)

        Heap Usage:
        New Generation (Eden + 1 Survivor Space):
           capacity = 19595264 (18.6875MB)
           used     = 11411976 (10.883308410644531MB)
           free     = 8183288 (7.804191589355469MB)
           58.23843965562291% used
        Eden Space:
           capacity = 17432576 (16.625MB)
           used     = 9249296 (8.820816040039062MB)
           free     = 8183280 (7.8041839599609375MB)
           53.05754009046053% used
        From Space:
           capacity = 2162688 (2.0625MB)
           used     = 2162680 (2.0624923706054688MB)
           free     = 8 (7.62939453125E-6MB)
           99.99963008996212% used
        To Space:
           capacity = 2162688 (2.0625MB)
           used     = 0 (0.0MB)
           free     = 2162688 (2.0625MB)
           0.0% used
        concurrent mark-sweep generation:
           capacity = 101556224 (96.8515625MB)
           used     = 83906080 (80.01907348632812MB)
           free     = 17650144 (16.832489013671875MB)
           82.62032270912317% used
        Perm Generation:
           capacity = 85983232 (82.0MB)
           used     = 62866832 (59.95448303222656MB)
           free     = 23116400 (22.045516967773438MB)
           73.1152232100324% used

    Anyone know what "From Space:" is? Any ideas on further fault-finding? I don't have much experience with this type of fault finding.
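    "From Space" is one of the two small survivor spaces in the young generation: objects that survive an Eden collection are copied back and forth between From and To on each minor GC, so a momentarily 100%-full survivor space is normal and not by itself a problem. Since GC looks clean, a thread dump taken during the freeze says far more than heap stats (a sketch; the pid is whatever ps reports for Tomcat):

        # Take several thread dumps a few seconds apart while the login hangs
        jstack -l <tomcat-pid> > dump1.txt

        # Or signal the JVM to print the dump into catalina.out
        kill -3 <tomcat-pid>

    A thread stuck in a socket read against PostgreSQL on the first post-idle request would point at stale pooled connections being handed out, a common cause of exactly this once-a-day-after-inactivity pattern.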

    Read the article

  • Sharing two SSL wildcard certificates in memory in nginx

    - by hvtilborg
    I have an nginx server running with two IP addresses, say 1.2.3.4 and 4.3.2.1. Besides that, there are two wildcard SSL certificates: one for *.example.net (i.e. wc1, pointing to 1.2.3.4) and one for *.sub.example.net (i.e. wc2, pointing to 4.3.2.1). The nginx docs mention that you can share a wildcard certificate between server instances like this:

        ssl_certificate     wc1.crt;
        ssl_certificate_key wc1.key;

        server {
            listen 1.2.3.4:443;
            server_name www.example.net;
            ssl on;
            ...
        }

        server {
            listen 1.2.3.4:443;
            server_name test.example.net;
            ssl on;
            ...
        }

    However, I was wondering whether this same construct can be used with the second wildcard certificate too. Both domains have around 500 subdomains. Do they not get mixed up, since the ssl_certificate directive is now global?
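    A hedged reading of the docs (file names assumed, as above): a certificate named inside a server block overrides the inherited http-level one for that server, so the two wildcards cannot get mixed up - each server uses whichever pair is in scope for it. Since wc1 already occupies the http level, the wc2 pair goes into each server listening on 4.3.2.1:

        server {
            listen 4.3.2.1:443;
            server_name www.sub.example.net;
            ssl on;
            ssl_certificate     wc2.crt;
            ssl_certificate_key wc2.key;
            ...
        }

    As far as I can tell, the memory-sharing note in the docs applies to the certificate configured once at http level; per-server certificates are loaded per block, which with ~500 subdomain servers is worth measuring.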

    Read the article

  • Wildcard subdomains: dealing with www as a subdomain

    - by Alaa Gamal
    I am using wildcard subdomains with Apache. My Apache config:

        ServerAlias *.staronece1.com
        DocumentRoot /staronece1/domains

    My named file:

        $ttl 38400
        staronece1.com.        IN  SOA  staronece1.com. email.yahoo.com. (
                                   1334838782
                                   10800
                                   3600
                                   604800
                                   38400 )
        staronece1.com.        IN  NS   staronece1.com.
        staronece1.com.        IN  A    95.19.203.21
        www.staronece1.com.    IN  A    95.19.203.21
        server.staronece1.com. IN  A    95.19.203.21
        mail.staronece1.com.   IN  A    95.19.203.21
        ns1.staronece1.com.    IN  A    95.19.203.21
        ns2.staronece1.com.    IN  A    95.19.203.21
        staronece1.com.        IN  NS   ns1.staronece1.com.
        staronece1.com.        IN  NS   ns2.staronece1.com.
        staronece1.com.        IN  MX   10 mail.staronece1.com.
        *                14400 IN  A    95.19.203.21
        *.staronece1.com       IN  A    95.19.203.21

    My PHP test file, /staronece1/domains/index.php:

        <?php
        function getBname(){
            $bname = explode(".", $_SERVER['HTTP_HOST'], 2);
            return $bname[0];
        }
        echo 'SubDomain is :'.getBname();
        ?>

    If I go to something.staronece1.com I get this result: "SubDomain is : something". Now the problem: if I go to www.staronece1.com I should get an empty result, because www is not a subdomain, but instead I get "SubDomain is : www". And if I go to www.something.staronece1.com I get a Firefox error message (site not found). How do I fix this problem? I think the solution is to add a record for www in the named file. Thanks
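    Two observations, offered as a sketch rather than a confirmed fix. First, in the zone file the *.staronece1.com record lacks a trailing dot, so BIND expands it to *.staronece1.com.staronece1.com. - the bare * record is the one doing the real work. Second, www.staronece1.com has an explicit A record, so it resolves and matches the ServerAlias like any other name; the PHP then faithfully reports its first label, www. If www should mean "no subdomain", the code has to say so:

        <?php
        function getBname() {
            $host = strtolower($_SERVER['HTTP_HOST']);
            // Treat a leading "www." as part of the base domain, not a subdomain
            $host = preg_replace('/^www\./', '', $host);
            $parts = explode('.', $host, 2);
            // Only a label left of the base domain counts as a subdomain
            return (isset($parts[1]) && $parts[1] === 'staronece1.com') ? $parts[0] : '';
        }
        echo 'SubDomain is :' . getBname();
        ?>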

    Read the article

  • can't figure out why apache LDAP auth fails

    - by SethG
    Suddenly, yesterday, one of my apache servers became unable to connect to my LDAP (AD) server. I have two sites running on that server, both of which use LDAP to authenticate against my AD server when a user logs in to either site. It had been working fine two days ago; for reasons unknown, as of yesterday, it stopped working. The error log only says this:

        auth_ldap authenticate: user foo authentication failed; URI /FrontPage
        [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server], referer: http://mysite.com/

    I thought perhaps my self-signed SSL cert had expired, so I created a new one for mysite.com, but not for the server hostname itself, and the problem persisted. I enabled debug-level logging. It shows the full SSL transaction with the LDAP server, and it appears to complete without errors until the very end, when I get the "Can't contact LDAP server" message.

    I can run ldapsearch from the command line on this server, and I can log in to it, which also uses LDAP, so I know the server can connect to and query the LDAP/AD server. It is only apache that cannot connect. Googling for an answer has turned up nothing, so I'm asking here. Can anybody provide insight into this problem? Here's the LDAP section from the apache config:

        <Directory "/web/wiki/">
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Login"
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative off
            #AuthBasicAuthoritative off
            AuthLDAPUrl ldaps://domain.server.ip/dc=full,dc=context,dc=server,dc=name?sAMAccountName?sub
            AuthLDAPBindDN cn=ldapbinduser,cn=Users,dc=full,dc=context,dc=server,dc=name
            AuthLDAPBindPassword password
            require valid-user
        </Directory>
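    Given that a self-signed certificate is in play, one suspect (an assumption worth checking, not a confirmed diagnosis) is that mod_ldap stopped trusting the LDAPS certificate: command-line ldapsearch and the login PAM stack use their own trust settings, so they can keep working while Apache fails at the end of the TLS handshake. A sketch of the checks:

        # Is the certificate the AD server presents on 636 valid and unexpired?
        openssl s_client -connect domain.server.ip:636 </dev/null 2>/dev/null \
          | openssl x509 -noout -dates

    and, in the Apache server-wide config (outside any vhost), mod_ldap's own trust directives:

        # Point mod_ldap at the CA that signed the AD server's certificate...
        LDAPTrustedGlobalCert CA_BASE64 /etc/pki/tls/certs/ad-ca.pem
        # ...or, to test whether verification is the problem at all (less safe):
        LDAPVerifyServerCert Off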

    Read the article

  • Blender refuses to start

    - by Sekhemty
    I'm trying to run Blender under Linux, but I'm unable to do so; whenever I try, I get some errors. I'm using Kubuntu 12.04 with KDE 4.11.1. This is my video card:

        ~$ lspci | grep VGA
        01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV610/M74 [Mobility Radeon HD 2400 XT]

    I used to have the fglrx proprietary Catalyst drivers installed, but lately they gave me some system-wide problems and I had to revert to the open-source Mesa drivers (I don't think these details are important, but just in case, the whole story is here). With the fglrx drivers Blender was running fine, but now, whenever I try to start it, I get this error message:

        ~$ blender
        connect failed: No such file or directory
        Writing: /tmp/blender.crash.txt
        Errore di segmentazione (core dump creato)   [segmentation fault (core dumped)]

    The content of /tmp/blender.crash.txt is as follows:

        # Blender 2.68 (sub 5), Revision: 60150
        # backtrace
        /usr/lib/blender/blender() [0x877a41f]
        [0xb7756400]
        /usr/lib/i386-linux-gnu/libLLVM-3.0.so.1(_ZN4llvm3ARM8SPRClassC1Ev+0x15) [0xa8f4a9d5]
        /usr/lib/i386-linux-gnu/libLLVM-3.0.so.1(+0x25ca48) [0xa8eefa48]
        /lib/ld-linux.so.2(+0xeeab) [0xb7765eab]
        /lib/ld-linux.so.2(+0xef94) [0xb7765f94]
        /lib/ld-linux.so.2(+0x12fa6) [0xb7769fa6]
        /lib/ld-linux.so.2(+0xeccf) [0xb7765ccf]
        /lib/ld-linux.so.2(+0x127f4) [0xb77697f4]
        /lib/i386-linux-gnu/libdl.so.2(+0xbe9) [0xb4ff9be9]
        /lib/ld-linux.so.2(+0xeccf) [0xb7765ccf]
        /lib/i386-linux-gnu/libdl.so.2(+0x133a) [0xb4ffa33a]
        /lib/i386-linux-gnu/libdl.so.2(dlopen+0x47) [0xb4ff9c97]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x3cbf0) [0xb7717bf0]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x4079d) [0xb771b79d]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x1a3aa) [0xb76f53aa]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(glXQueryVersion+0x2e) [0xb76f0cee]
        /usr/lib/blender/blender(_ZN15GHOST_WindowX11C1EP15GHOST_SystemX11P9_XDisplayRK10STR_Stringiijj18GHOST_TWindowStatei25GHOST_TDrawingContextTypebbt+0x11c) [0x8f54aec]
        /usr/lib/blender/blender(_ZN15GHOST_SystemX1112createWindowERK10STR_Stringiijj18GHOST_TWindowState25GHOST_TDrawingContextTypebbti+0xd7) [0x8f4f4a7]
        /usr/lib/blender/blender(GHOST_CreateWindow+0xb6) [0x8f4cf86]
        /usr/lib/blender/blender(wm_window_add_ghostwindows+0x205) [0x8799be5]
        /usr/lib/blender/blender(WM_check+0x50) [0x877b670]
        /usr/lib/blender/blender(wm_homefile_read+0x111) [0x87859f1]
        /usr/lib/blender/blender(WM_init+0xd2) [0x8787872]
        /usr/lib/blender/blender(main+0xe6e) [0x873848e]
        /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0xb4e694d3]
        /usr/lib/blender/blender() [0x8778a99]

    The only thing I can guess from this report is that the Mesa drivers are somehow involved, as I already suspected, but I don't have a clue what I need to do to try to solve the issue.
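    The backtrace ends inside Mesa's libGL while it dlopen()s a driver during glXQueryVersion, which supports the Mesa suspicion. Two hedged experiments that need nothing beyond mesa-utils:

        # Force Mesa's software rasterizer, bypassing the hardware driver entirely
        LIBGL_ALWAYS_SOFTWARE=1 blender

        # Check which renderer is active and whether direct rendering works at all
        glxinfo | grep -iE "direct rendering|renderer"

    If Blender starts with the software rasterizer, the crash lives in the hardware driver stack (possibly fglrx leftovers shadowing Mesa's libGL), not in Blender itself.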

    Read the article

  • How do I fix a custom Event Viewer Log that merges automatically with the Application log?

    - by NightOwl888
    I am trying to create a custom event log for a Windows Service on Windows Server 2003. I would like to name the custom log "(ML) Startup Commands". However, when I add a registry key with that name under HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\, it adds a log, but one that shows the exact same events as the Application log when viewed in Event Viewer. If I instead add a registry key named "(ML) Startup Commands 2", it shows a blank event log as expected. In fact, any other name works correctly except the one I want. I have searched through the registry for other keys containing the string "(ML)" and removed all other references to this key name, yet I continue to get merged results in the viewer when I create a key with this name.

    My question is: how can I fix the server so I can create a custom event log with this name that shows only the events from my application, not the events from the default Application event log that ships with Windows?

    UPDATE: I rebooted the server and, wouldn't you know it, the log started acting normally. I got a strange error message in the Application log:

        The EventSystem sub system is suppressing duplicate event log entries for a duration
        of 86400 seconds. The suppression timeout can be controlled by a REG_DWORD value
        named SuppressDuplicateDuration under the following registry key:
        HKLM\Software\Microsoft\EventSystem\EventLog. For more information, see Help and
        Support Center at http://go.microsoft.com/fwlink/events.asp.

    I can only hope this error doesn't mean the problem will come back after 86400 seconds. I guess I will have to wait and see.
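    For reference, a custom log normally needs more than a bare key: the Eventlog service reads at least a File value (where the .evt file lives), with per-source subkeys added underneath as applications register. A sketch of creating the log from the command line (the file path is an assumption, pick your own):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\(ML) Startup Commands" /v File /t REG_EXPAND_SZ /d "C:\WINDOWS\system32\config\MLStartup.evt"

    The Eventlog service only picks up newly registered logs when it starts, which is consistent with the log behaving correctly only after the reboot.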

    Read the article

  • Unable to Manage DNS via MMC

    - by IT Helpdesk Team Manager
    When trying to access the DNS service on a Microsoft Windows Server 2003 (build 3790) domain controller/schema master via the MMC DNS snap-in, or locally via the DNS MMC from Administrative Tools, I get a red "X" through the icon for the DNS server. The inability to access DNS management via MMC happens on all domain controllers as well. We've looked at items such as the DHCP client not being started, incorrect DNS setup (the machine points at itself and another DC), and the DNS service not running (it is, and all DNS queries via NSLOOKUP work correctly); dnslint returns the correct information and functions as expected. There is the following entry in the DNS event log:

        The DNS server could not initialize the remote procedure call (RPC) service. If it
        is not running, start the RPC service or reboot the computer. The event data is the
        error code. For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.
        0000: 0000051b

    dnscmd fails with "RPC server unavailable", yet RPC is started:

        C:\Documents and Settings\Administrator.DOMAIN>dnscmd /Info
        Info query failed
            status = 1722 (0x000006ba)
        Command failed:  RPC_S_SERVER_UNAVAILABLE     1722  (000006ba)

    DCDIAG /TEST:DNS /V /E produces the following errors:

        Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running)
        [Error details: 1753 (Type: Win32 - Description: There are no more endpoints
        available from the endpoint mapper.)]
        Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running)
        [Error details: 1722 (Type: Win32 - Description: The RPC server is unavailable.)]
        The DNS server could not initialize the remote procedure call (RPC) service. If it
        is not running, start the RPC service or reboot the computer. The event data is the
        error code.

    A DNS query for _ldap._tcp.dc._msdcs. returns the correct results. All domain and ADS-related activities are working, except that I can't manage my DNS via MMC or dnscmd. Any thoughts or solutions would be greatly appreciated.

    EDIT: Adding registry export per request:

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc
        Class Name:      <NO CLASS>
        Last Write Time: 10/18/2012 - 2:29 PM
        Value 0
          Name: DCOM Protocols
          Type: REG_MULTI_SZ
          Data: ncacn_ip_tcp
        Value 1
          Name: UuidSequenceNumber
          Type: REG_DWORD
          Data: 0xb19bd0f

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\ClientProtocols
        Class Name:      <NO CLASS>
        Last Write Time: 3/9/2007 - 12:11 PM
        Value 0
          Name: ncacn_np
          Type: REG_SZ
          Data: rpcrt4.dll
        Value 1
          Name: ncacn_ip_tcp
          Type: REG_SZ
          Data: rpcrt4.dll
        Value 2
          Name: ncadg_ip_udp
          Type: REG_SZ
          Data: rpcrt4.dll
        Value 3
          Name: ncacn_http
          Type: REG_SZ
          Data: rpcrt4.dll
        Value 4
          Name: ncacn_at_dsp
          Type: REG_SZ
          Data: rpcrt4.dll

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NameService
        Class Name:      <NO CLASS>
        Last Write Time: 2/20/2006 - 4:48 PM
        Value 0
          Name: DefaultSyntax
          Type: REG_SZ
          Data: 3
        Value 1
          Name: Endpoint
          Type: REG_SZ
          Data: \pipe\locator
        Value 2
          Name: NetworkAddress
          Type: REG_SZ
          Data: \\.
        Value 3
          Name: Protocol
          Type: REG_SZ
          Data: ncacn_np
        Value 4
          Name: ServerNetworkAddress
          Type: REG_SZ
          Data: \\.

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NetBios
        Class Name:      <NO CLASS>
        Last Write Time: 2/20/2006 - 4:48 PM

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\RpcProxy
        Class Name:      <NO CLASS>
        Last Write Time: 3/9/2007 - 12:11 PM
        Value 0
          Name: Enabled
          Type: REG_DWORD
          Data: 0x1
        Value 1
          Name: ValidPorts
          Type: REG_SZ
          Data: pdc:100-5000

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\SecurityService
        Class Name:      <NO CLASS>
        Last Write Time: 2/20/2006 - 4:48 PM
        Value 0
          Name: 9
          Type: REG_SZ
          Data: secur32.dll
        Value 1
          Name: 10
          Type: REG_SZ
          Data: secur32.dll
        Value 2
          Name: 14
          Type: REG_SZ
          Data: schannel.dll
        Value 3
          Name: 16
          Type: REG_SZ
          Data: secur32.dll
        Value 4
          Name: 1
          Type: REG_SZ
          Data: secur32.dll
        Value 5
          Name: 18
          Type: REG_SZ
          Data: secur32.dll
        Value 6
          Name: 68
          Type: REG_SZ
          Data: netlogon.dll
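    Given that DNS answers queries while every RPC-based tool (the MMC snap-in, dnscmd, dcdiag's RPC probe) fails, the endpoint mapper and the DNS server's registered endpoint are the usual suspects. Two hedged checks from the affected DC:

        rem Is the endpoint mapper listening on TCP 135, and which process owns it?
        netstat -ano | findstr ":135"

        rem PortQry (a free Microsoft download) can enumerate the endpoint map
        portqry -n dcname -e 135

    Error 1753 ("there are no more endpoints available from the endpoint mapper") specifically means the mapper answered but held no registration for the DNS service, which points at the DNS server failing to register its RPC endpoint at startup (the event logged above) rather than at network connectivity.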

    Read the article

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy and apt asked me to run apt-get -f install to fix it, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature
        is disabled in the running kernel. Please upgrade your kernel before or while
        upgrading udev.

        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES
        NOT WORK WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by
        creating the /etc/udev/kernel-upgrade file.

        There is always a safer way to upgrade, do not try this unless you understand what
        you are doing!

        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
         /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I just ran the VBoxLinuxAdditions-amd64.run script, basically), so I don't know how to disable them. Any ideas? Thanks!
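    The pre-installation script is failing deliberately: udev 151 refuses to install under the old lenny kernel. The message points at the two ways out (a sketch; the metapackage name is the usual one for sid of that era, so verify with apt-cache search linux-image):

        # Safer path: install the current unstable kernel first, reboot into it,
        # then let apt finish the interrupted upgrade
        apt-get install linux-image-2.6-amd64
        reboot
        apt-get -f install

        # Risky path spelled out by the message itself (can break the next boot):
        # touch /etc/udev/kernel-upgrade && apt-get -f install

    The insserv warnings about vboxadd-x11 appear to be noise from the Guest Additions init scripts rather than the cause of the failure; the Additions will need re-running against the new kernel after the reboot anyway.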

    Read the article

  • Raid-5 Performance per spindle scaling

    - by Bill N.
    So I am stuck in a corner. I have a storage project that is limited to 24 spindles and requires heavy random write (the corresponding read side is purely sequential). It needs every bit of space on my drives, ~13TB total in an n-1 raid-5, and it has to go fast, over 2GB/s sort of fast. The obvious answer is to use a stripe/concat (raid-0/1), or better yet a raid-10 in place of the raid-5, but that is disallowed for reasons beyond my control. So I am here asking for help in getting a sub-optimal configuration to be as good as it can be.

    The array is built on direct-attached SAS-2 10K rpm drives, backed by an ARECA 18xx series controller with 4GB of cache, with 64k array stripes and a 4K stripe-aligned XFS file system with 24 allocation groups (to avoid some of the penalty for being raid-5).

    The heart of my question is this: in the same setup with 6 spindles/AGs, I see near disk-limited performance on the write, ~100MB/s per spindle; at 12 spindles I see that drop to ~80MB/s, and at 24, to ~60MB/s. I would expect that with distributed parity and matched AGs, performance should scale with the number of spindles, or be worse at small spindle counts, but this array is doing the opposite.

    What am I missing? Should raid-5 performance scale with the number of spindles? Many thanks for your answers and any ideas, input, or guidance. --Bill

    Edit: Improving RAID performance - the other relevant thread I was able to find discusses some of the same issues in the answers, though it still leaves me without an answer on the performance scaling.
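    As a back-of-envelope check (a sketch, not a diagnosis): even if every random write smaller than a full stripe costs a read-modify-write of data plus parity (roughly four disk I/Os per host write), aggregate throughput should still grow with spindle count:

        6 spindles  x 100 MB/s  =  600 MB/s aggregate
        24 spindles x  60 MB/s  = 1440 MB/s aggregate

    So the array does get faster in total; what shrinks is per-spindle efficiency, which clean raid-5 scaling would keep flat. A per-spindle rate that falls as spindles are added is the usual signature of a shared bottleneck (controller XOR engine, cache bandwidth, or SAS link oversubscription) rather than the parity math itself; running the same spindle counts in raid-0 on the same controller would separate the parity cost from the controller ceiling.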

    Read the article

  • SQL Full-Text indexing not populating

    - by Sam
    Hi, we installed a clustered SQL 2005 installation on Windows 2008, reattached our SAN drives from another machine, and restored, to do a migration to new hardware. There have been a few minor issues, but this one has me stuck: trying to populate full-text indexes is not working. I created a basic table with some simple text in a new database and get the same results as the old indexes:

        2010-09-27 10:30:46.85 spid19s  Informational: Full-text Full population initialized for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'). Population sub-tasks: 1.
        2010-09-27 10:31:15.36 spid19s  Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001DF. Attempt will be made to reindex it.
        2010-09-27 10:31:15.37 spid19s  The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'.
        2010-09-27 10:31:15.37 spid19s  Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001E0. Attempt will be made to reindex it.

    The rebuild/repopulate procedure finishes, but I get zero rows in the index. The .dll in the message is present, and the service accounts have access to it. My FTData folder also has data in it, so it seems there wouldn't be a permission issue on that folder either. The application throws this error:

        PHP Warning: mssql_query() [function.mssql-query]: message: Full-text catalog
        'ikm_PageIndex_FText' is in an unusable state. Drop and re-create this full-text
        catalog. (severity 16) in E:\Inetpub\knowledgebase_insidemesa\lib\database\mssql.php
        on line 154

    A Microsoft discussion is the only post I found which claimed to fix this; it said the problem was registry-related, but then didn't post the fix.
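    For what it's worth, Win32 error 0x80070003 is "The system cannot find the path specified", which fits a full-text catalog whose on-disk folder path changed when the SAN drives moved between machines. A hedged first step is to rebuild the catalog so it is recreated under the new instance's FTData path (catalog name taken from the application error above; the owning database is an assumption):

        -- run in the database that owns the catalog
        ALTER FULLTEXT CATALOG ikm_PageIndex_FText REBUILD;

    If the catalog is too damaged to alter, dropping and re-creating it (DROP FULLTEXT CATALOG / CREATE FULLTEXT CATALOG, then re-adding the table indexes) forces the same path refresh.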

    Read the article

  • ssh connection slow when using @hostname.com but not when using @ipaddress

    - by Alex Recarey
    When connecting to a Debian server using ssh, if I use [email protected] (the IP address of the server) the connection is instant. If however I use [email protected] (a DNS record pointed at the IP address of the server), the ssh connection hangs for 20 seconds before connecting successfully. The ssh log shows the following:

        [alex@alex home]$ ssh -v -v [email protected]
        OpenSSH_5.5p1, OpenSSL 1.0.0c-fips 2 Dec 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0

    and here it hangs for 20 seconds before continuing. I think it might have something to do with reverse DNS or similar (the server does not really "know" its name is hostname.com, it just has a DNS record pointed at its IP address). I have added the following options to /etc/ssh/sshd_config:

        UseDNS no
        GSSAPIAuthentication no

    to no effect. The server's DNS records in /etc/resolv.conf are configured correctly:

        ping hostname.com
        PING sub.domain.com (X.X.X.X) 56(84) bytes of data.
        64 bytes from replicant (X.X.X.X): icmp_seq=1 ttl=64 time=0.029 ms
        64 bytes from replicant (X.X.X.X): icmp_seq=2 ttl=64 time=0.050 ms

    Thanks for the help.

    Solution: It seems the DSL router my ISP saddled me with was causing the trouble. Changing my DNS server from 192.168.1.1 (the router's IP) to Google's (8.8.8.8, always good to know when you are in a hurry) instantly solved the connection delay problem. I am guessing that the 50€ router provided does not cache DNS entries, although I don't understand why pinging the DNS address had no delay, and 20 seconds is too long a wait, even for uncached DNS. Thanks again for the help!
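    For reference, the client-side fix boils down to pointing the resolver past the router (a sketch):

        # /etc/resolv.conf on the ssh client
        nameserver 8.8.8.8
        nameserver 8.8.4.4

    Note that UseDNS and GSSAPIAuthentication in sshd_config act on the server side; the same 20-second symptom can also come from the client's own GSSAPI attempts, so setting GSSAPIAuthentication no in the client's /etc/ssh/ssh_config is worth trying when the resolver is not at fault.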

    Read the article

  • How can I use Varnish to generate a robots.txt file, even for subdomains of the same site?

    - by Sam
    I want to generate a robots.txt file using Varnish 2.1. That means domain.com/robots.txt is served using Varnish, and subdomain.domain.com/robots.txt is also served using Varnish. The robots.txt must be hardcoded into the default.vcl file. Is that possible? I know Varnish can generate a maintenance page on error, like this:

        sub vcl_error {
            set obj.http.Content-Type = "text/html; charset=utf-8";
            synthetic {"
        <?xml version="1.0" encoding="utf-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
          <head>
            <title>Maintenance in progress</title>
          </head>
          <body>
            <h1>Maintenance in progress</h1>
          </body>
        </html>
        "};
            return (deliver);
        }

    I'm trying to make it generate a robots.txt file instead. Can anyone help?
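    The usual Varnish 2.1 idiom for this (a sketch; the 702 status code is an arbitrary choice, and the robots body is a placeholder) is to catch the URL in vcl_recv, raise a synthetic error, and branch on the status in vcl_error. Because vcl_recv runs for every host name the instance serves, subdomains get the same response automatically:

        sub vcl_recv {
            if (req.url == "/robots.txt") {
                error 702 "robots.txt";
            }
        }

        sub vcl_error {
            if (obj.status == 702) {
                set obj.status = 200;
                set obj.http.Content-Type = "text/plain; charset=utf-8";
                synthetic {"User-agent: *
        Disallow: /private/
        "};
                return (deliver);
            }
            # ... existing maintenance-page handling ...
        }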

    Read the article

  • Re-configure Office 2007 installation unattended: Advertised components --> Local

    - by abstrask
    On our Citrix farm, I just found out that some sub-components are "Installed on 1st Use" (advertised), which does not play well on terminal servers. Not only that, but you also get a rather non-descriptive error message when a document tries to use a component that is "Installed on 1st Use" (described in Plan to deploy Office 2010 in a Remote Desktop Services environment):

        Microsoft Office cannot run this add-in. An error occurred and this feature is no
        longer functioning correctly. Please contact your system administrator.

    I have ~50 Citrix servers where I need to change the installation state of all advertised components to Local, so I created an XML file like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Configuration Product="ProPlus">
          <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
          <Logging Type="standard" Path="C:\InstallLogs" Template="MS Office 2007 Install on 1st Use(*).log" />
          <Option Id="AccessWizards" State="Local" />
          <Option Id="DeveloperWizards" State="Local" />
          <Setting Id="Reboot" Value="NEVER" />
        </Configuration>

    I run it with a command like this (using the appropriate paths):

        "[..]\setup.exe" /config ProPlus /config "[..]\Install1stUse-to-Forced.xml"

    According to the log file, the syntax appears to be accepted and the config file parsed:

        Parsing command line.
        Config XML file specified: [..]\Install1stUse-to-Forced.xml
        Modify requested for product: PROPLUS
        Parsing config.xml at: [..]\Install1stUse-to-Forced.xml
        Preferred product specified in config.xml to be: PROPLUS

    But the "Final Option Tree" still reads:

        Final Option Tree:
          AlwaysInstalled:local
          Gimme_OnDemandData:local
          ProductFiles:local
          VSCommonPIAHidden:local
          dummy_MSCOMCTL_PIA:local
          dummy_Office_PIA:local
          ACCESSFiles:local
          ...
          AccessWizards:advertised
          DeveloperWizards:advertised
          ...

    and the components remain advertised. Just to see if the installation state is overridden in another XML file, I ran:

        findstr /l /s /i "AccessWizards" *.xml

    against both my installation source and "%ProgramFiles%\Common Files\Microsoft Shared\OFFICE12\Office Setup Controller", but only found DefaultState to be "Local". What am I doing wrong? Thanks!
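    One thing that stands out (worth verifying against the Config.xml reference for the 2007 Office system) is the element name: feature states are set with OptionState elements, not Option, and the state values are lowercase - which would explain why the parser accepts the file yet the Final Option Tree never changes. A sketch of the corrected file:

        <?xml version="1.0" encoding="utf-8"?>
        <Configuration Product="ProPlus">
          <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
          <Logging Type="standard" Path="C:\InstallLogs" Template="MS Office 2007 Install on 1st Use(*).log" />
          <!-- OptionState (not Option) controls feature installation states;
               Children="force" pushes the state down to all child features -->
          <OptionState Id="AccessWizards" State="local" Children="force" />
          <OptionState Id="DeveloperWizards" State="local" Children="force" />
          <Setting Id="Reboot" Value="NEVER" />
        </Configuration>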

    Read the article
