Search Results

Search found 42115 results on 1685 pages for 'access management'.


  • Finding the user of IIS AppPool\DefaultAppPool

    - by LosManos
    My IIS app pool user is trying to create a folder but fails. How do I find out which user it is? Let's say I don't know much about IIS7 but need to trace what is happening through tools. The scene of the crime is WinSrv2008 with IIS7. So I fire up Sysinternals Process Monitor to find out what is happening and, just as I suspected, I find Access Denied on a folder. But for which user? I add the User column to the output and it says IIS APPPOOL\DefaultAppPool, in capitals. Well... that isn't a user, is it? If I go into IIS, then the app pool's Advanced Settings, Process Model and Identity, I can see clues about which user it is, but only because I know IIS. What if it had been Apache or lighttpd or whatever? How do I see the user so I can grant it the appropriate rights?

    Read the article

  • Asus G53SX How to use the recovery partition

    - by Amento
    I am trying to use the recovery partition on my Asus G53SX laptop, but the instructions in the included booklet don't match what happens on the computer. The booklet says to press F9 during boot-up, press ENTER to select Windows Setup, then choose the language you want to recover, and so on. When I press F9 I end up in the boot manager, and from there I can access Safe Mode and similar options. The closest thing I can find in this list is "Repair your computer", but that menu takes me to recovery points and backup images, none of which are mentioned in the booklet. How can I use the recovery partition to restore my laptop to its factory state?

    Read the article

  • Transferring files from computer to Android Simulator SD Card?

    - by mgpyone
    I've tried the Android Simulator for Mac and it works well; I've also given it 100 MB of SD storage. However, I haven't found a way to transfer files from my Mac to that SD storage. My current workaround is to email files to myself, open the mail in the Simulator and download them there, but that doesn't work for all formats: image files (.img), for example, can't be downloaded in the Simulator. I've also looked for an SD card folder inside the Android folder I extracted, and found nothing. I want to transfer files from my hard drive to the Simulator's SD card storage. Is there an effective way to do this? I'm on Mac OS X 10.6.2.
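
    A minimal sketch of the usual route, assuming the Android SDK's adb tool is on the PATH and the emulator is already running (adb treats it like any attached device); the file name below is just a placeholder:

      # list running emulators/devices; the emulator usually shows up as emulator-5554
      adb devices
      # copy a file from the Mac into the emulated SD card
      adb push ~/Pictures/example.img /sdcard/
      # confirm it arrived
      adb shell ls /sdcard/

    The SD card image itself (sdcard.img inside the AVD's folder) can also be rebuilt with the SDK's mksdcard tool, but adb push is normally the simpler path.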

    Read the article

  • My server freezes within a few hours of logging out. Staying logged in keeps the server running

    - by HappyEngineer
    I have an Ubuntu Godaddy server I use to host mail and webapps. It started having problems a couple months ago. It would lock up and stop responding to anything. I couldn't ssh into it, so I'd have godaddy power cycle the server. I have never seen anything that looked suspicious in the var logs (although I'm no expert at reading them). An fsck turned up no problems. Godaddy replaced the ram, but found no hardware problems. I started logging the output from "top" to a log file and found that even that stops running when the server freezes. Now, here is the crazy part: It got so bad that it would actually go down every few hours, but then it stopped going down. I eventually realized I had left an ssh terminal logged into the machine running top. This seemed unlikely to be a reason, but after the server was up with no problems for a full week (remember, it had been going down after just a few hours), I disconnected from the ssh session. Lo and behold, within a few hours the server froze again! I had them power cycle again and then left another ssh session open with top. It has been going without problems for 8 days now. I told others about this and they hardly believe me. I simply can't imagine what is going on. I don't know what else to try other than to just get a new server and reinstall everything. Does anyone have any ideas about what I can look for to determine what the cause is? Is it possible there's some sort of exploit on the server which only runs if everyone is logged out of the system? EDIT: The power management gone haywire sounds plausible, so I've modified the /boot/grub/menu.lst to boot with acpi=off and apm=off. It appears to have prevented kacpid and kacpid_notify from being in the process list, so I assume I did that right. I've disconnected all my sessions from the server. I'll check later tonight to see if it's still up. If it goes down then I'll try the pinging process idea. EDIT: It went down again. It lasted about a day. I've had them reboot, so now I'll try running "nohup ping -i 5 google.com &" and then disconnect. If it goes down again I'll come back. Hopefully someone will have some more ideas.
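
    A minimal sketch of the workaround described above, run from an ssh session before logging out, so that some low-rate activity and a history of top output survive the disconnect (the log paths are arbitrary):

      # keep a trickle of activity going after logout
      nohup ping -i 5 google.com > ~/keepalive-ping.log 2>&1 &
      # keep recording what the box was doing right before any freeze (one snapshot per minute)
      nohup top -b -d 60 > ~/top-history.log 2>&1 &
      # detach both jobs from this shell so closing the session doesn't kill them
      disown -a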

    Read the article

  • Need help generating a core dump from apache segfault

    - by blockhead
    I have a script which intermittently returns a white screen of death in Firefox, and "Error 324 (net::ERR_EMPTY_RESPONSE): Unknown error." in Chrome. When I try to access the script using a PHP HTTP client (like Zend_Http_Client), I intermittently get an exception (sorry, I don't have the exact message on me at the moment). I suspect a segfault. This is further supported by lines in my error log that look like this: [Thu Mar 18 16:03:02 2010] [notice] child pid 845 exit signal Segmentation fault (11) Now, I'm running Red Hat, and I know that Red Hat doesn't generate core dumps out of the box. I followed the instructions at http://kbase.redhat.com/faq/docs/DOC-5353, but I'm not seeing any core dumps. How do I generate a core dump?
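
    A minimal sketch of the usual recipe on RHEL with Apache, assuming httpd runs as the apache user and /tmp/apache-cores is an acceptable dump location (both are assumptions):

      # give the Apache children somewhere writable to dump core
      mkdir -p /tmp/apache-cores && chown apache:apache /tmp/apache-cores
      echo 'CoreDumpDirectory /tmp/apache-cores' > /etc/httpd/conf.d/coredump.conf
      # raise the core size limit for the init script (on RHEL it sources this file)
      echo 'ulimit -c unlimited' >> /etc/sysconfig/httpd
      service httpd restart
      # after the next "exit signal Segmentation fault", look for core files here
      ls -l /tmp/apache-cores

    The resulting core can then be opened with gdb against the httpd binary to get a backtrace of the crashing request.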

    Read the article

  • Error pushing to remote with git

    - by pcm2a
    I have a fresh CentOS 6 server set up and I have installed git version 1.7.1 through yum. I am using the smart HTTP method through Apache for access. When I try to push to the remote server, this is what I get:

      $ git push origin master
      Password:
      Counting objects: 6, done.
      Compressing objects: 100% (3/3), done.
      Writing objects: 100% (6/6), 436 bytes, done.
      Total 6 (delta 0), reused 0 (delta 0)
      error: unpack failed: index-pack abnormal exit

    I have tried these things, which made no difference:

      chown -R apache:apache /path/to/git/repository   (httpd runs as apache)
      chown -R apache:users /path/to/git/repository
      chmod -R 777 /path/to/git/repository             (obviously not secure, but I wanted to rule out file permissions)

    What can I try to get pushing to work?
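
    A minimal sketch of the two usual culprits for this error over smart HTTP: pushes not enabled for the HTTP backend, and object directories the apache user cannot write into. The paths mirror the ones above; adjust as needed:

      # inside the bare repository: allow pushes through git-http-backend
      git config http.receivepack true
      # keep new objects group-writable so apache can keep adding to them
      git config core.sharedRepository group
      chgrp -R apache /path/to/git/repository
      find /path/to/git/repository/objects -type d -exec chmod g+ws {} +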

    Read the article

  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I have been directed here from Stack Overflow, so I am reposting the question and adding my my.cnf at the end of the post. So far, in my 10+ years of experience with Linux, every permission problem I've ever encountered has been successfully solved with chmod -R 777 /path/where/the/problem/has/occurred (every lie has a grain of truth in it :). This time the trick doesn't work, so I'm turning to you for help. I'm compiling the MySQL server from scratch with zc.buildout (www.buildout.org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching it from within a supervisor (supervisord.org) script, and on the deployment server it will need to be launched with root permissions (so that the nginx server, launched by the same script, can bind to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the MySQL error log (Apache and nginx work as expected). http://lists.mysql.com/mysql/216045 suggests that "there are two errors: A missing table and a file system that mysqld doesn't have access to". The MySQL data dir and all the MySQL server binaries have 777 permissions, the table mysql.plugin does exist and has 777 permissions (so why "Can't open the mysql.plugin table"?), and "sudo touch mysql_datadir/tmp/file" does create a file (so why "Can't create/write to file /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz"?). Running chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) has no effect: when I launch it as a regular user it starts; when I launch it as root, it fails. Does the MySQL server, even when started as root, try to operate as another user, say 'mysql'? Even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to the mysql group should make things work smoothly. I do know that it might be a better idea to simply launch nginx as root and MySQL as a plain user, but this error irritated me enough to devote the energy not only to "make things work", but to make them work exactly as I intended, as a proof of concept that it's possible. This is the generated error:

      091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir
      /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only
      091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
      /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13)
      091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13
      091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error.
      091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied
      091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
      091213 20:02:55 [ERROR] Aborting
      091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete
      091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended

    My my.cnf (the basedir and datadir, including tmpdir, have chmod -R 777 permissions):

      [client]
      socket       = /home/toinbis/.../runtime/var/pids/mysql.sock
      port         = 8002

      [mysqld_safe]
      socket       = /home/toinbis/.../runtime/var/pids/mysql.sock
      nice         = 0

      [mysqld]
      # * Basic Settings
      socket       = /home/toinbis/.../runtime/var/pids/mysql.sock
      port         = 8002
      pid-file     = /home/toinbis/.../runtime/var/pids/mysql.pid
      basedir      = /home/toinbis/.../parts/mysql
      datadir      = /home/toinbis/.../runtime/mysql_datadir
      tmpdir       = /home/toinbis/.../runtime/mysql_datadir/tmp
      skip-external-locking
      bind-address = 127.0.0.1
      log-error    = /home/toinbis/.../runtime/logs/mysql_errorlog

      # * Fine Tuning
      key_buffer         = 16M
      max_allowed_packet = 32M
      thread_stack       = 128K
      thread_cache_size  = 8
      myisam-recover     = BACKUP
      #max_connections    = 100
      #table_cache        = 64
      #thread_concurrency = 10

      # * Query Cache Configuration
      query_cache_limit = 1M
      query_cache_size  = 16M

      # * Logging and Replication
      # Both location gets rotated by the cronjob.
      # Be aware that this log type is a performance killer.
      #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log
      # Error logging goes to syslog. This is a Debian improvement :)
      # Here you can see queries with especially long duration
      #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log
      #long_query_time = 2
      #log-queries-not-using-indexes
      # The following can be used as easy to replay backup logs or for replication.
      #server-id        = 1
      #log_bin          = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log
      #binlog_format    = ROW
      #read_only        = 0
      #expire_logs_days = 10
      #max_binlog_size  = 100M
      #sync_binlog      = 1
      #binlog_do_db     = include_database_name
      #binlog_ignore_db = include_database_name

      # * InnoDB
      innodb_data_file_path = ibdata1:10M:autoextend
      innodb_buffer_pool_size=64M
      innodb_log_file_size=16M
      innodb_log_buffer_size=8M
      innodb_flush_log_at_trx_commit=1
      innodb_file_per_table
      innodb_locks_unsafe_for_binlog=1

      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 32M

      [mysql]
      #no-auto-rehash  # faster start of mysql but no tab completion

      [isamchk]
      key_buffer = 16M

    Any ideas much appreciated! Regards, to. P.S. Sorry for the messy hyperlinks; it's my first post and the anti-spam feature of SF doesn't allow me to post them properly :)
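
    A minimal sketch of the usual way to make a root-launched mysqld_safe behave like the working non-root run: tell it to drop to a dedicated user and make sure that user really owns the data directory. The mysql user name is an assumption, and any of the errors above can also come from AppArmor/SELinux confinement, which is worth checking separately:

      # run the daemon itself as the mysql user even though supervisord starts it as root
      sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe --user=mysql &
      # 777 on the files is not enough if the directories aren't owned sensibly
      sudo chown -R mysql:mysql /home/toinbis/.../runtime/mysql_datadir
      sudo chown mysql:mysql /home/toinbis/.../runtime/var/pids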

    Read the article

  • Ubuntu 10.04 server crash

    - by Jamie
    I'm running Ubuntu 10.04 (x64) as a web/MySQL server. The server became unresponsive to SSH, ping, HTTP, etc., and the technician with physical access to the machine sent me this screengrab from the connected monitor before he rebooted it (which fixed the immediate problem): http://img442.imageshack.us/img442/389/img00062201012211332.jpg I'm not sure which log this information is kept in, as I can't find the text anywhere in the logs after the reboot. Can anyone help me investigate what happened, to try to ensure it doesn't happen again? Thanks
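
    A minimal sketch of where to look for that console output after the fact, plus one way to capture the next crash; the package name is an assumption for this Ubuntu release:

      # a panic/oops printed to the console usually also lands in the kernel log, if the disk was still writable
      sudo grep -i -A 20 -e 'Oops' -e 'panic' -e 'BUG:' /var/log/kern.log /var/log/kern.log.1 /var/log/syslog
      # to capture future crashes even when the filesystem is wedged, set up kernel crash dumps
      sudo apt-get install linux-crashdump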

    Read the article

  • Network - Routers conflicting in my subnet

    - by Richard
    I have a router whose IP is 192.168.1.1, and I seem to be having a conflict with another router on my subnet (which probably also uses 192.168.1.1). When that other router joins the network it apparently takes the place of mine: when I try to open my router's configuration page, what appears is the configuration page of a router that isn't mine. Is there any solution other than changing my router to a less common IP address? Can I somehow make my router the only one answering on that address? I use dynamic IPs for wireless clients and static IPs for wired ones. What should I do? I would like a setting on my own router that prevents it from conflicting with the others.

    Read the article

  • How to restore missing space in NTFS file systems

    - by jacobsee
    I have a 40 GB USB hard drive formatted with NTFS, on a PC running Windows XP Pro SP3. I am trying to free as much space as possible. Windows Explorer tells me that I have about 200 MB of files on the drive (with hidden and system files shown). The drive properties, however, show 73% free, with around 10 GB used. I ran CHKDSK and it found all kinds of problems. I am now running defrag, and it behaves as if there really were 10 GB of files, but I can't find them anywhere. How can I find and remove this extra 10 GB?

    Read the article

  • BlueCoat reverse proxy NTLM authentication

    - by mathieu
    Currently, when we want to access an internal site (IIS with NTLM auth) from the Internet, two login screens appear: step 1, LDAPAuth from the BlueCoat, which checks the login/password against Active Directory; step 2, NTLM auth from our application. Is it possible to configure the reverse proxy to take the LDAP credentials provided at step 1 and pass them to whatever application requests them? Of course, if those credentials aren't valid, nothing should happen. We're using a BlueCoat SG400. Update: we're not looking for SSO where the user doesn't have to enter a password. We want the user to enter his domain credentials in the LDAPAuth dialog box, and the proxy to reuse them to authenticate against our application, or any application that uses NTLM. We've only got one AD domain behind the reverse proxy.

    Read the article

  • How to add Nvidia drivers after a previous failure with Linux Mint?

    - by LessThanMe
    Before today I had perfectly good Nvidia drivers on my Linux Mint (15) box. I decided to update them because my performance in TF2 was less than stellar, and then things went south. I used Synaptic to install nvidia-331 and rebooted, but when I selected Mint in GRUB I waited... and waited... and waited. Nothing happened, although the display stayed on (outputting a completely black screen). So I went into recovery mode from GRUB, dropped to a root shell, apt-get remove --purge nvidia*'d my way out of that mess, and installed nvidia-common. Now my performance in graphics-intensive things (read: games, Blender) is poor, so I've been through the same cycle a few times trying to reinstall nvidia-current. I just want to get it back to how it was. Thanks for any help! Nvidia GTX 560
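
    A minimal sketch of a clean reinstall from a recovery-mode root shell, assuming the distribution-packaged driver is wanted; the package version is the one mentioned above and a different release may suit the GTX 560 better:

      # recovery shells often mount the root filesystem read-only
      mount -o remount,rw /
      # remove whatever mix of driver packages is currently half-installed
      apt-get purge 'nvidia*'
      apt-get update
      # reinstall one packaged driver, then reboot
      apt-get install nvidia-331
      reboot

    If the boot still hangs afterwards, /var/log/Xorg.0.log from the failed session is the next place to look.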

    Read the article

  • Website running on Tomcat port 8443 will only resolve with IP address, not DNS

    - by littleK
    I recently set up a web server running Tomcat 7 on Ubuntu 12.04. It is currently running on port 8080, and I have just enabled SSL on port 8443. Here's my problem: on port 8080 the website resolves via DNS (http://www.mywebpage.com:8080), but on port 8443 I can only access the website by IP address (http://0.0.0.0:8443); it will not work if I use the DNS name. I ultimately want to disable port 80 and use port 8443 only. Does anyone know why I cannot reach the website on port 8443 using DNS, and how I might fix it? Thanks!
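
    A minimal sketch of how to separate the possible causes (DNS, the connector's bind address, or a firewall), assuming the hostname above and that the commands are run on or near the server:

      # does the name resolve to the server's public address at all?
      dig +short www.mywebpage.com
      # is Tomcat's SSL connector listening on all interfaces (0.0.0.0:8443) or only one address?
      sudo netstat -tlnp | grep 8443
      # where exactly does an HTTPS request by name fail? (-k skips certificate checks)
      curl -vk https://www.mywebpage.com:8443/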

    Read the article

  • Ubuntu with Netatalk and Samba TimeMachine can't connect

    - by Philip
    I installed Netatalk on my Ubuntu server a few weeks ago and configured it so that I could back up with Time Machine from my Mac to the server instead of to an external hard drive. It worked really well until yesterday, when I installed Samba to share certain folders on the server with my Mac. Now I get this error message: "There are no shares available or you are not allowed to access them on the server. Please contact your system administrator to resolve the problem." From what I understand, the problem is on the server and not on my Mac. I have tried restarting the computer and re-adding the Time Machine destination ("afp://...@...") without touching any of the folders Samba is sharing. Is there a problem with running both at the same time? Do I need to configure Samba so that it doesn't interfere with AFP? I'm pretty new at this...
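
    A minimal sketch of the first things worth checking on the server: that both daemons are actually running and that the newly added Samba configuration parses cleanly. The service names and the afpd log path are assumptions and vary between setups:

      # validate the Samba configuration added yesterday
      testparm -s
      # make sure the AFP daemon is still up alongside Samba
      sudo service netatalk restart
      sudo service smbd restart
      # look for refusals or errors from the AFP side
      tail -n 50 /var/log/afpd.log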

    Read the article

  • Unable to configure Ruby with readline

    - by Liam Berg
    1) ./configure --prefix=$HOME/.packages --with-readline-dir=$HOME/.packages 2) configure: WARNING: unrecognized options: --with-readline-dir I am trying to setup the most up-to-date version of Ruby on my webhost (I do not have sudo access). Line 1 is the configure command I used for Ruby and Line 2 is the first printed line after executing 'configure'. I've googled this issue and found other people with the same problem but there aren't any real solutions. There are no warnings or errors when configuring/compiling readline-6.1. I am pretty stumped, any help/insight would be greatly appreciated. Thanks ahead of time.
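
    A minimal sketch of a workaround when the top-level option is ignored: build readline into the same prefix first, then point Ruby's build at it through the standard compiler and linker variables, and verify the extension afterwards. The warning about unrecognized --with options is often harmless by itself; the real requirement is that the readline headers and library are findable:

      ./configure --prefix=$HOME/.packages \
          CPPFLAGS="-I$HOME/.packages/include" \
          LDFLAGS="-L$HOME/.packages/lib"
      make && make install
      # confirm the readline extension actually got built
      $HOME/.packages/bin/ruby -rreadline -e 'puts "readline ok"'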

    Read the article

  • How can I enable http auth in lighttpd for all directories except one?

    - by Nuri Hodges
    I am trying to authenticate access to everything in webroot (/) except anything that resides in a particular directory (/directory/) and I've tried both of these options to no avail:

      $HTTP["url"] =~ "^(?!(/directory))" {
          auth.require = ( "" => (
              "method"  => "basic",
              "realm"   => "auth to this area",
              "require" => "user=username"
          ) )
      }

      $HTTP["url"] != "/directory" {
          auth.require = ( "" => (
              "method"  => "basic",
              "realm"   => "auth to this area",
              "require" => "user=username"
          ) )
      }
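
    A minimal sketch of one approach that tends to work here, assuming lighttpd 1.4 with mod_auth already loaded: use the regex no-match operator (!~) rather than the exact-string != comparison, so every URL that does not start with /directory gets the auth requirement:

      $HTTP["url"] !~ "^/directory(/|$)" {
          auth.require = ( "" => (
              "method"  => "basic",
              "realm"   => "auth to this area",
              "require" => "user=username"
          ) )
      }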

    Read the article

  • How to browse Windows XP from Mac Finder when the name disappears from Finder

    - by Chris
    Occasionally, like right at this moment, I cannot access my Windows share from my Mac. Normally it works, but every now and then the computer name won't be displayed under SHARED in Finder. Rebooting the Windows computer usually fixes this, but it's inconvenient. The Windows computer can see the Mac on the network. Is there a way of asking Finder to poll for Windows shares again, or of "forcing" Finder to look for "desktop"? I'm looking for the equivalent of typing \\desktop in the address bar of a Windows computer. Thanks for the help.
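
    A minimal sketch of the Terminal-side equivalent, assuming the XP machine's NetBIOS name is "desktop" as above:

      # can the name still be resolved even though Finder isn't showing it?
      smbutil lookup desktop
      # connect directly by name, roughly the Finder equivalent of typing \\desktop on Windows
      open 'smb://desktop'

    The same connection can be made inside Finder via Go > Connect to Server (Cmd-K) with smb://desktop.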

    Read the article

  • How to connect to an FTP server from outside the LAN?

    - by srisar
    Hi all, I'm setting up a home FTP server so I can share some files with my friends outside my LAN. I am using FileZilla Server and everything is configured. http://www.canyouseeme.org/ even sees my port 21 as open, but when I connect through an FTP client or a web browser, it says "530 User saravana access denied." How can I solve this problem? I checked the user name and password and everything is good, but I didn't set up any passive mode (I didn't know how to set it); could that be causing the trouble? Can anyone help me? By the way, I can connect locally through localhost.

    Read the article

  • Slow internet using Arch Linux

    - by GZaidman
    After a week or so of using Arch Linux I can't access the internet: it takes around 5 minutes to load Google (most other websites just give me a timeout), pacman's download speed ranges between 2 and 5 KB/s, and pinging Google takes around 9000 ms. I'm connected over a wireless network (the wifi card is an Intel Ultimate 6300 and the router is an Edimax 6524n). Every other Windows machine connected to the network (even the same T410 running Windows) is fine, so the problem lies in Linux. So far I have checked resolv.conf (my router's IP address is listed) and the hosts file (pretty much default), and I disabled the ipv6 module. None of that helped. PS: I'm using NetworkManager (but the problem still occurs when connecting with wicd) running on GNOME 3. Thanks in advance for any help you can provide! EDIT: something really strange happens whenever I ping Google: I get "unknown host 'google.com'", but the bit rate from the card jumps at the exact second I ping (so far the bit rate jumped from 1 Mb/s to 54 Mb/s over the course of 4 pings).
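
    A minimal sketch of how to confirm that this is name resolution rather than the wireless link itself, and to try a public resolver instead of the router's:

      # raw connectivity to a plain IP - if this is fast, the wifi link is fine
      ping -c 3 8.8.8.8
      # name resolution - if only this is slow or fails, DNS is the problem
      ping -c 3 google.com
      # temporarily switch to a public resolver instead of the router
      echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf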

    Read the article

  • How can I uninstall AppFabric?

    - by downatone
    I recently upgraded from Windows 7 to Windows 8. AppFabric was not one of the programs flagged as incompatible when I ran the upgrade wizard, so I did not uninstall it. Now, whenever I go to "Add or remove features", I get the following error from the Windows Server AppFabric Setup Wizard:

      Windows Server AppFabric is not supported on current operating system Windows 8 Pro (version 6.2.9200.0).
      Please refer to installation guide for the list of supported operating systems.

    Unfortunately, the only way to uninstall AppFabric is via "Add or remove features". Does anyone know a command-line command to kick off the uninstall, so I can at least get back into "Add or remove features"? Edit: C:\Windows\System32\AppFabric\Setup.exe /remove throws the same error as above.

    Read the article

  • How to disable TCP/IP settings in Windows 7 via GPO?

    - by Akash Kava
    I have enabled the following policies: "Prohibit TCP/IP advanced connection", "Prohibit access to properties of components of a LAN connection", and "Enable Windows 2000 Network Connections settings for Administrators". After doing all this, the network settings Properties button is disabled as expected on all machines running Windows XP, 2000 and Vista. However, it has no effect on any machine running Windows 7, so I believe there are a few more steps. All the Windows 7 machines are on the domain, and we want to control this via the domain controller's GPO. Please let me know what I need to do to have Windows 7 disable the properties of the network connection. I am not a network expert; I read a few articles about what's new in Windows 7 GPO, but I'm drawing a blank. Everything works fine on Windows XP, Vista and Server 2003; only Windows 7 is a problem.

    Read the article

  • Error installing Arch Linux

    - by Garethj94
    So, I am trying to install Arch Linux on my Acer Aspire 4830TG, but I keep running into problems. Some background: I am installing Arch from a USB stick, and I got the ISO image via BitTorrent. I am also trying to install it alongside Windows 8 (which is already installed). When I boot the Arch Linux installer I get this error:

      :: Mounting '/dev/disk/by-label/ARCH_201212' to 'run/archiso/bootmnt'
      Waiting 30 seconds for device /dev/disk/by-label/ARCH_201212 ...
      ERROR: '/dev/disk/by-label/ARCH_201212' device did not show up after 30 seconds...
         Falling back to interactive prompt
         You can try to fix the problem manually, log out when you are finished
      sh: can't access tty; job control turned off

    I know it works if I run it in a virtual machine, but whenever I try to install it on my laptop I keep getting this error. And since you can't register for the Arch forums without an Arch terminal to run their captcha command, I can't ask this on their forums.
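
    A minimal sketch of the usual manual fix at that emergency prompt: find the USB stick's partition yourself and mount it where the installer expected the labeled device to appear. The device name below is a guess, so check blkid's output first; sometimes simply unplugging the stick, replugging it and waiting a few seconds is enough.

      # list block devices and their labels to find the USB stick
      blkid
      # mount it manually where archiso was waiting for it (adjust /dev/sdb1 to match blkid)
      mount /dev/sdb1 /run/archiso/bootmnt
      # leave the emergency shell; the boot should continue normally
      exit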

    Read the article

  • Can connect to DNS addresses typed in the URL but not by IP addresses

    - by Ben
    I just changed my modem over to bridged mode and set my wireless router to PPPoE. My PC's IP address is reserved (by MAC address) and the router forwards port 80 to it. I have a problem, however: from any other wireless computer or the iPad, I cannot reach my local web server via the public IP address, nor the router at 192.168.0.1. I can, however, connect from this PC, which is attached to the wireless router via Ethernet. Over wireless it says it cannot connect, yet DNS names (e.g. google.com) work fine. Any ideas?
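
    A minimal sketch of a quick test from one of the wireless clients to tell a NAT-loopback limitation apart from a server problem; both addresses below are placeholders for the server's LAN IP and the new public IP:

      # the server's LAN address - if this returns headers, the server itself is fine
      curl -I http://192.168.0.10/
      # the public address from inside the LAN - this fails on routers without NAT loopback (hairpinning)
      curl -I http://203.0.113.5/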

    Read the article

  • Should DKIM signing happen in the application or the MTA?

    - by thomasrutter
    I'm trying to weigh up whether DKIM signing should be done by the application sending mail (for instance, the mailing list software you're using) or at the mail transfer agent (sendmail, postfix etc). Do you know any good arguments either way? As far as I can see, doing it at the MTA, such as with dkim-milter, is a lot easier to set up. However, if anyone gets access to the server, even just a normal unprivileged account such as a web hosting client's login, they'd be able to send email using sendmail and get the full blessing of my DKIM signature. What do you think is the best solution for my situation? I'm using a Debian server with apache, postfix, php&mysql, etc.
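
    A minimal sketch of the MTA-side wiring with Postfix and a DKIM milter, plus one way to blunt the "any local account gets my signature" concern by restricting who may submit mail at all. The milter port and the user list are assumptions:

      # hook the DKIM milter into Postfix for both SMTP and locally submitted mail
      sudo postconf -e 'smtpd_milters = inet:localhost:8891'
      sudo postconf -e 'non_smtpd_milters = inet:localhost:8891'
      # only these users may hand mail to the sendmail/postdrop binary
      sudo postconf -e 'authorized_submit_users = root, www-data'
      sudo service postfix restart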

    Read the article

  • Wake On Lan for Fedora 12

    - by Toymakerii
    I have a Fedora 12 box that I am using as a sandbox for web development and a few other toys. The box gets really hot, so I would like it to sleep/hibernate when no one is using it; however, most of the people connecting to the box will not be able to access it physically. Is it possible to set up Wake-on-LAN so that the machine wakes up when it detects an SSH connection? A Google search didn't yield much information (or at least I wasn't smart enough to recognize it as useful!).
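
    A minimal sketch of the standard pieces, with the caveat that a sleeping NIC cannot inspect an incoming SSH attempt by itself: Wake-on-LAN normally needs some always-on device on the LAN (a router, another box) to send the magic packet first, after which the SSH connection can be made. The interface name and MAC address below are placeholders:

      # on the Fedora box: check whether the NIC supports magic-packet wake and enable it
      ethtool eth0 | grep Wake-on
      sudo ethtool -s eth0 wol g
      # from an always-on machine on the same LAN, wake it before connecting
      wakeonlan 00:11:22:33:44:55
      ssh user@fedora-box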

    Read the article
