Search Results

Search found 42629 results on 1706 pages for 'dry run'.

  • MySQL extension of PHP not working

    - by Víctor
    On a Debian server, after installing and removing SquirrelMail (with some downgrading and upgrading of php5, mysql, ...), the MySQL extension of PHP has stopped working. I have php5-mysql installed, and when I connect to a database through php-cli I connect successfully, but when I try to connect from a page served by Apache I cannot connect. This script, run by php5-cli:

        <?php
        echo phpinfo();
        $link = mysql_connect('localhost', 'user', 'password');
        if (!$link) {
            die('Could not connect: ' . mysql_error());
        }
        echo 'Connected successfully';
        mysql_close($link);

    prints the phpinfo output, which includes "/etc/php5/cli/conf.d/mysql.ini" and a MySQL section with all the configuration (SOCKET, LIBS, ...), and then prints "Connected successfully". But when it is run by Apache and accessed through a web browser, it displays the phpinfo output, which includes "/etc/php5/apache2/conf.d/mysql.ini", but the MySQL section is missing, and the script dies printing "Fatal error: Call to undefined function mysql_connect()". Note that "/etc/php5/cli/conf.d/mysql.ini" and "/etc/php5/apache2/conf.d/mysql.ini" are in fact the same configuration, because on Debian I have the structure:

        /etc/php5/apache2
        /etc/php5/cgi
        /etc/php5/cli
        /etc/php5/conf.d

    and both conf.d entries point at the same directory:

        /etc/php5/apache2/conf.d -> ../conf.d
        /etc/php5/cli/conf.d -> ../conf.d

    where /etc/php5/conf.d/mysql.ini consists of one line:

        extension=mysql.so

    So my question is: why is the MySQL extension for PHP not working under Apache when the configuration is included in exactly the same way as in php-cli, which works? Thanks a lot!
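
    A sketch of one way to narrow this down (assuming Debian's stock php5 packaging; the package and log paths below are the usual ones, so verify against your system): compare what each SAPI loads, then reinstall the extension and restart Apache.

        # list modules the CLI loads, and which ini files it parsed
        php -m | grep -i mysql
        php --ini

        # reinstall the extension package and restart Apache
        sudo apt-get install --reinstall php5-mysql
        sudo /etc/init.d/apache2 restart

    If the module still loads only under the CLI, the Apache error log (typically /var/log/apache2/error.log) often records why mysql.so failed to load at startup.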

  • Webserver max CPU when Apache and MySQL are run together

    - by Tim
    This website had been running fine without issues; recently it went down. After some investigation it looks like the combination of MySQL and Apache brings the box to its knees. Apache runs fine serving static web pages, and MySQL runs fine when the website isn't enabled. As soon as the website is enabled with SQL running, the CPU on the box stays at 100%. Picture of the usage: http://i.stack.imgur.com/GG2NC.png I've checked the SQL database for errors and tried tuning nearly every parameter in the Apache and MySQL conf files for performance. The server is a Red Hat based box running the latest software packages. Any help/suggestions are welcome. Doing an strace on a high-CPU Apache process I see the following:

        read(14, "", 8192) = 0
        close(14) = 0
        socket(PF_FILE, SOCK_STREAM, 0) = 14
        fcntl64(14, F_SETFL, O_RDONLY) = 0
        fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
        connect(14, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"...}, 110) = 0
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0

    Here is what I see from a mysql process:

        futex(0x86fc9a4, FUTEX_WAIT_PRIVATE, 39, NULL) = 0
        futex(0x86fc734, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x86fc734, FUTEX_WAKE_PRIVATE, 1) = 0
        gettimeofday({1301465020, 141613}, NULL) = 0
        clock_gettime(CLOCK_REALTIME, {1301465020, 141699633}) = 0
        futex(0x8707a64, FUTEX_WAIT_PRIVATE, 1, {4, 999913367}) = 0
        futex(0x8707a40, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x8707a40, FUTEX_WAKE_PRIVATE, 1) = 0
        exit_group(0) = ?
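
    A minimal first step for a case like this (standard MySQL tooling; adjust credentials to your setup): watch the process list while the site is enabled, and turn on the slow query log to see which queries are burning the CPU.

        # re-print what MySQL is executing every 2 seconds while the CPU is pegged
        mysqladmin -u root -p -i 2 processlist

        # in /etc/my.cnf under [mysqld] (MySQL 5.1+; older versions use log-slow-queries),
        # then restart mysqld:
        #   slow_query_log      = 1
        #   slow_query_log_file = /var/log/mysql-slow.log
        #   long_query_time     = 1

    A query doing a full table scan on every page load is the most common culprit for this exact symptom.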

  • How to tell statd to use portmap on a non-localhost IP address?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to 2 different networks (one public, one private), and I want it to provide an NFS share for the private network only. The host is an Ubuntu 8.04 machine and the private IP address is 192.168.1.202. I changed /etc/default/portmap to add:

        OPTIONS="-i 192.168.1.202"

    The command lsof -n | grep portmap returns:

        portmap 10252 daemon cwd DIR 202,0 4096 2 /
        portmap 10252 daemon rtd DIR 202,0 4096 2 /
        portmap 10252 daemon txt REG 202,0 15248 13461 /sbin/portmap
        portmap 10252 daemon mem REG 202,0 83708 32823 /lib/tls/i686/cmov/libnsl-2.7.so
        portmap 10252 daemon mem REG 202,0 1364388 32817 /lib/tls/i686/cmov/libc-2.7.so
        portmap 10252 daemon mem REG 202,0 31304 16588 /lib/libwrap.so.0.7.6
        portmap 10252 daemon mem REG 202,0 109152 16955 /lib/ld-2.7.so
        portmap 10252 daemon 0u CHR 1,3 960 /dev/null
        portmap 10252 daemon 1u CHR 1,3 960 /dev/null
        portmap 10252 daemon 2u CHR 1,3 960 /dev/null
        portmap 10252 daemon 3u unix 0xecc8c3c0 4332992 socket
        portmap 10252 daemon 4u IPv4 4332993 UDP 192.168.1.202:sunrpc
        portmap 10252 daemon 5u IPv4 4332994 TCP 192.168.1.202:sunrpc (LISTEN)
        portmap 10252 daemon 6u REG 0,12 289 3821511 /var/run/portmap_mapping

    I defined the following in /etc/hosts:

        192.168.1.202 server.local

    In /etc/default/nfs-common I changed STATDOPTS to:

        STATDOPTS="--name server.local"

    Yet when I run /etc/init.d/nfs-common start it fails to start. The log shows:

        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting
        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags:
        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp).

    An strace -f rpc.statd -n server.local results in a lot of lines, including this one:

        sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
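
    One quick check before going further (standard ONC RPC tooling): ask the portmapper on each address which services it has registered, to confirm where registrations are actually landing.

        # query the portmapper on the private address
        rpcinfo -p 192.168.1.202

        # compare with the loopback portmapper
        rpcinfo -p 127.0.0.1

    The strace line above shows rpc.statd registering via 127.0.0.1, where portmap no longer listens once it is bound to 192.168.1.202 only, which would explain the "unable to register" failure.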

  • Which DNS settings are used when setting up your server

    - by Saif Bechan
    I have a server and want to run my own name server. I have already set it up and it works, but I do not know where the exact settings are stored. On my server I use Plesk; when I edit DNS settings there, I think they are stored in named.conf, and BIND (named) is installed on the server. I also have a control panel from my registrar, separate from my server. In both places I can add the normal MX, A, CNAME, etc. records. Now, where is the best place to put these settings? Currently I have the same records in both places, on the server and in the registrar's panel. Am I correct that I should just add all the records in the registrar's panel, remove everything from Plesk, and simply not run DNS on my server, since the registrar's panel already covers it? Or should I add the records in both places?
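
    A quick way to see which name servers the outside world actually consults (standard dig usage; the domain and server names below are placeholders): check the delegation, then query each candidate server directly and compare.

        # which name servers is the domain delegated to?
        dig +short NS example.com

        # ask your own server and the registrar's server directly
        dig @your.server.ip example.com A
        dig @ns1.registrar-dns.example example.com A

    Only the servers listed in the delegation are ever consulted by resolvers; records maintained anywhere else are dead weight.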

  • Wildcard SSL and Apache configuration

    - by Nitai
    Hi all, I'm pulling my hair out over this configuration, which is probably simple. I have a wildcard SSL certificate, and it works. I have the website set up to run on domain.com under SSL. Now I need to run many subdomains (*.domain.com) on the same server with the same SSL certificate. Shouldn't be that hard, right? Well, I can't get it going. The point is that the first config proxies a Tomcat server that serves another site and listens for domain.com and www.domain.com, while the second config listens for *.domain.com and pulls the content from a different Tomcat server. I already tried this whole setup with mod_rewrite, but I simply don't see what I'm doing wrong. Any help is very much appreciated. Here is my conf in Apache 2.2:

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCertificateChainFile ...
            ServerName domain.com
            ServerAlias www.domain.com
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPass / ajp://localhost:8010/
            ProxyPassReverse / ajp://localhost:8010/
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCertificateChainFile ...
            ServerName domain.com
            ServerAlias *.domain.com
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPass / ajp://localhost:8009/
            ProxyPassReverse / ajp://localhost:8009/
        </VirtualHost>

    Thanks.
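
    Two quick checks that often explain vhost-matching surprises like this (stock Apache 2.2 tooling; hostnames are placeholders): dump how Apache actually parsed the vhosts, and test name selection during the TLS handshake.

        # show which vhost wins for each address/port/name
        apachectl -S

        # test name-based selection over SSL (needs SNI on both ends)
        openssl s_client -connect your.server.ip:443 -servername sub.domain.com

    Two things stand out in the config above: both vhosts declare ServerName domain.com, and name-based SSL vhosts on 2.2 also require a NameVirtualHost *:443 directive; without SNI, every request simply lands in the first *:443 vhost.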

  • knife on Windows inconsistently reads ~\.chef\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of (open-source v10.12) Chef in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, but that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results. I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get:

        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable

    (Note that the path is incorrect.) This is the progression as I walk up the tree towards ~, running knife client list at each step:

        C:\Users\waldo\Dropbox\_company\projects\   => Above error
        C:\Users\waldo\Dropbox\_company\            => Above error
        C:\Users\waldo\Dropbox\                     => It works! (Expected results)
        C:\Users\waldo\                             => Expected results
        C:\Users\waldo\Documents\                   => Expected results
        C:\Users\waldo\Documents\GitHub             => Expected results
        C:\Users\waldo\Documents\GitHub\aProject\   => Expected results

    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. Question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?
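
    Worth knowing here: knife searches upward from the current working directory for a .chef\knife.rb before falling back to the one in your home directory, which would match the pattern above exactly if a stray .chef directory is hiding somewhere under Dropbox\_company. A quick way to look, from the PowerShell prompt already in use (the path is an example; point it at your Dropbox root):

        Get-ChildItem -Path C:\Users\waldo\Dropbox -Recurse -Force -Filter knife.rb |
            Select-Object FullName

    Deleting or correcting any stray copy (one whose client_key still points at C:/home/waldo/.chef) should make the home-directory config win everywhere.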

  • Bridging my laptop's wireless and wired adaptors

    - by stacey.richards
    I would like to be able to connect a desktop computer that does not have a wireless adapter to my wireless network. I could just run a network cable from my ADSL/wireless router to the desktop computer, but sometimes this is not practical. What I would really like to do is bridge my laptop's wireless and wired adapters in such a way that I can run a network cable from my laptop to a switch, and another network cable from the switch to a desktop computer, so that the desktop computer can access the Internet through my ADSL/wireless router via my laptop:

        +--------------------+
        |ADSL/wireless router|
        +--------------------+
                   |
        +-------------------------+
        |laptop's wireless adapter|
        |                         |
        |laptop's wired adapter   |
        +-------------------------+
                   |
              +------+
              |switch|
              +------+
                   |
        +-----------------------+
        |desktop's wired adapter|
        +-----------------------+

    A bit of Googling suggests that I can do this by bridging my laptop's wireless and wired adapters. In Windows XP's Network Connections I select both the Local Area Connection and the Wireless Network Connection, right-click and select Bridge Connections. From what I gather, this (layer 2?) bridge examines the MAC address of traffic coming from the wireless network and passes it through to the wired network if it suspects that a network adapter with that MAC address may be on the wired side, and vice versa. If this is the case, I would assume that when the desktop computer attempts to get an IP address from a DHCP server (which is running on the ADSL/wireless router), it would send a DHCP broadcast packet which would pass through the laptop's bridge to the router, and the reply would return through the laptop's bridge back to the desktop. This doesn't happen. With some more Googling I found instructions for doing this with Linux. I rebooted into Ubuntu 9.10 and typed the following:

        sudo apt-get install bridge-utils
        sudo brctl addbr br0
        sudo brctl addif br0 wlan0
        sudo brctl addif br0 eth0
        sudo ifconfig wlan0 0.0.0.0
        sudo ifconfig eth0 0.0.0.0

    Once again, the desktop cannot reach the ADSL/wireless router. I suspect that I'm missing some simple important step. Can anyone shed some light on this for me?
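
    For what it's worth, a commonly missing step in recipes like this is bringing the bridge itself up and giving it an address (a sketch, assuming DHCP from the router):

        sudo ifconfig br0 up
        sudo dhclient br0

    There is also a hardware caveat: many wireless drivers cannot forward frames whose source MAC is not their own (standard 802.11 three-address framing), so bridging from the wireless side often fails regardless of the commands; ebtables-based NAT or routing with proxy ARP are the usual workarounds.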

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use on an unencumbered player. The curl aspect of this is working: I can run the script and a browser side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, followed by exec $CMD. When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script, the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) and another way in the foreground (interactive shell)?
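
    The error line dst="'#standard{...}'" is the telltale: the single quotes stored inside $CMD reach cvlc literally, because quotes are not re-parsed when a variable expands. A sketch of the standard bash workaround (an array instead of a flat string; the mms URL is elided as in the original):

        #!/bin/bash
        # build the command as an array so each element stays exactly one argument
        CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' 'mms://...')

        printf '%q ' "${CMD[@]}"; echo   # show what will actually run
        exec "${CMD[@]}"

    Pasting works because the interactive shell parses the quotes fresh; the script only performed variable expansion and word splitting on an already-parsed string, and quote removal never re-applies to expanded text.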

  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools), and the primary machine that would normally act as the data server in the performance test we're trying to run is failing to boot into Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. However, this morning, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the exact errors in front of me; I scavenged a Windows installation to get onto Server Fault). Eventually, the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen, but neither the mouse nor keyboard functioned. In olden days, I would try to get to a point where I could run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly? For completeness, I am comfortable working with bootable recovery distros-on-CD and such, but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.
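
    If the disk itself is suspect, the conservative order of operations from any rescue environment is to image the disk first and repair the copy, so repeated reads don't degrade the bad sectors further. A sketch using GNU ddrescue (device and mount-point names are examples; confirm with fdisk -l first):

        # clone the failing root partition to external storage,
        # logging progress so an interrupted run can resume
        ddrescue /dev/sda1 /mnt/usb/sda1.img /mnt/usb/sda1.log

        # repair and mount the copy rather than the original
        fsck -y /mnt/usb/sda1.img
        mount -o loop /mnt/usb/sda1.img /mnt/rescue

    The Fedora Live CD already being downloaded works for this if ddrescue can be installed into the live session; otherwise SystemRescueCd ships it out of the box.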

  • GNU screen cannot find terminfo entry on HP-UX

    - by Ency
    I am trying to make screen work on HP-UX B.11.23 U ia64 0308561483 unlimited-user license. Please note that I do not have root access. I have already compiled screen successfully, configured with LIBS=-lcurses. When I try to start screen it writes:

        Cannot find terminfo entry for 'xterm'.

    But there ARE terminfo entries for the terminal type:

        screen-4.0.3> ls -a /usr/share/lib/terminfo/x/
        .  ..  x-hpterm  x1700  x1720  x1750  xitex  xl83  xterm  xterms

    I think the problem may be that they are in a non-standard path, because according to the man page the standard path is /usr/lib/terminfo/?/*. What I tried: as I said, I do not have root access, so I can't make a symlink. Anyway, I tried running screen with TERMINFO_DIRS filled in (TERMINFO_DIRS=/usr/share/lib/terminfo/x/ ./screen and TERMINFO_DIRS=/usr/share/lib/terminfo/ ./screen), but neither works; I get the same error. I changed TERM to different values: same error, Cannot find terminfo entry for <WHATEVER WAS IN TERM>. I put something into a screenrc and ran ./screen -c screenrc:

        screen-4.0.3> cat screenrc
        attrcolor b ".I"
        term xterm
        termcap xterm* LP:hs@
        termcapinfo xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'
        defbce "on"

    But no luck so far. Have you got any suggestions? If you need additional information, let me know.
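
    One more avenue that needs no root (standard curses tooling, assuming infocmp and tic are present on the box): compile a private terminfo database under $HOME and point the library at it via TERMINFO.

        # export a source description of xterm from the non-standard tree
        TERMINFO=/usr/share/lib/terminfo infocmp xterm > xterm.ti

        # compile it into a private database
        mkdir -p $HOME/.terminfo
        TERMINFO=$HOME/.terminfo tic xterm.ti

        # run screen against the private database
        TERMINFO=$HOME/.terminfo ./screen

    A likely reason the earlier attempts changed nothing: TERMINFO_DIRS is an ncurses extension, and HP-UX's native curses (the -lcurses screen was linked against) has historically honored TERMINFO but not TERMINFO_DIRS.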

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, COM, or otherwise (even Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation, in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script remains listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance). Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold: Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing). And would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts? Thanks in advance for any suggestions; I've been banging my head against this one for a bit...
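
    On the Task Scheduler branch, at least the automation half has a stock answer: schtasks.exe (present on both XP and Windows 7) can create a boot-time task non-interactively. A sketch (the task name and script path are placeholders):

        schtasks /Create /TN "LocalStartupScript" /TR "C:\Scripts\startup.cmd" /SC ONSTART /RU SYSTEM

    What it does not give you is the blocking behavior of a real Group Policy startup script: the logon UI appears regardless of whether the task has finished.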

  • How to pass alias through sudo

    - by Tanktalus
    I have an alias that passes some parameters to a tool that I use often. Sometimes I run it as myself, sometimes under sudo. Unfortunately, of course, sudo doesn't recognise the alias. Does anyone have a hint on how to pass the alias through? In this case, I have a bunch of options for perl when I'm debugging:

        alias pd='perl -Ilib -I/home/myuser/lib -d'

    Sometimes, I have to debug my tools as root, so instead of running:

        pd ./mytool --some params

    I need to run it under sudo. I've tried many ways:

        sudo eval $(alias pd)\; pd ./mytool --some params
        sudo $(alias pd)\; pd ./mytool --some params
        sudo bash -c "$(alias pd)\; pd ./mytool --some params"
        sudo bash -c "$(alias pd); pd ./mytool --some params"
        sudo bash -c eval\ "$(alias pd)\; pd ./mytool --some params"
        sudo bash -c eval\ "'$(alias pd)\; pd ./mytool --some params'"

    I was hoping for a nice, concise way to ensure that my current pd alias is fully used (in case I need to tweak it later), though some of my attempts weren't concise at all. My last resort is to put it into a shell script and put that somewhere sudo will be able to find it. But aliases are soooo handy sometimes, so that is a last resort.
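
    One concise trick worth knowing (documented bash behavior, not sudo-specific): if an alias value ends in a space, bash also checks the word that follows it for alias expansion. Aliasing sudo itself therefore lets pd expand after it:

        alias sudo='sudo '
        alias pd='perl -Ilib -I/home/myuser/lib -d'

        sudo pd ./mytool --some params   # pd expands before sudo ever runs

    This keeps the single definition of pd authoritative, at the cost of one extra alias in your shell startup file.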

  • Installed MySQL using yum but mysqld doesn't start: Fedora 16

    - by Sumit Singh Bir
    I installed mysql, mysql-server and mysql-libs. After installation I tried to start the MySQL service with systemctl start mysqld.service and got:

        Failed to issue method call: Unit mysqld.service failed to load: Invalid argument.
        See system logs and 'systemctl status mysqld.service' for details.

    systemctl status mysqld.service likewise reported an invalid argument, so I replaced the contents of /etc/systemd/system/mysqld.service with:

        [Unit]
        Description=MySQL database server
        After=syslog.target
        After=network.target

        [Service]
        User=mysql
        Group=mysql

        ExecStart=/usr/sbin/mysqld --pid-file=/var/run/mysqld/mysqld.pid
        ExecStop=/bin/kill -15 $MAINPID
        PIDFile=/var/run/mysqld/mysqld.pid

        # We rely on systemd, not mysqld_safe, to restart mysqld if it dies
        Restart=always

        # Place temp files in a secure directory, not /tmp
        PrivateTmp=true

        [Install]
        WantedBy=multi-user.target

    With that, the "invalid argument" error was resolved and mysqld.service loaded but was not enabled. systemctl start mysqld.service worked fine. It worked! But then enabling it with systemctl enable mysqld.service or service mysqld start does nothing: the cursor just keeps blinking after I press Enter. I set aside disk space on this F16 install for development work and cannot figure out a way forward; please, somebody help.
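
    For reference, the usual sequence after hand-editing a unit file (stock systemd commands on Fedora 16; the unit name is the one from the question):

        # make systemd re-read unit files after any edit
        sudo systemctl daemon-reload

        # enable at boot, then verify without starting anything
        sudo systemctl enable mysqld.service
        systemctl is-enabled mysqld.service
        systemctl status mysqld.service

    If enable still hangs, the output of systemctl status and the tail of /var/log/messages from that moment are the first things worth capturing.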

  • Why is java -version returning a different version than the one defined in JAVA_HOME?

    - by Shekhar
    I am trying to set JAVA_HOME on Ubuntu. I have copied JDK 1.7 into /usr/lib/jvm and set JAVA_HOME in the /etc/profile file. The contents of the /usr/lib/jvm folder are as follows:

        shekhar@ubuntu:~$ ls /usr/lib/jvm/
        default-java        java-1.6.0-openjdk       java-6-openjdk         java-6-openjdk-i386  jdk1.7.0_01
        java-1.5.0-gcj-4.6  java-1.6.0-openjdk-i386  java-6-openjdk-common  java-7-openjdk-i386

    and the last few lines of /etc/profile are:

        export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_01
        export PATH=$PATH:$JAVA_HOME/bin

    After finishing all this, when I run the java -version command I get the following output:

        shekhar@ubuntu:~$ java -version
        java version "1.6.0_24"
        OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
        OpenJDK Server VM (build 20.0-b12, mixed mode)

    and when I run ls -lah I get the following output:

        shekhar@ubuntu:~$ ls -lah /usr/bin/java
        lrwxrwxrwx 1 root root 22 Sep 29 09:58 /usr/bin/java -> /etc/alternatives/java
        shekhar@ubuntu:~$ ls -lah /etc/alternatives/java
        lrwxrwxrwx 1 root root 45 Sep 29 09:58 /etc/alternatives/java -> /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java

    Can anyone please tell me what I am missing? Why does Ubuntu still point to OpenJDK and not to my JDK 7? PS: I have seen a similar question and its answers, but it concerned Windows, not Ubuntu, so I am asking again for Ubuntu.
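
    The symlink chain in the ls output is the answer in embryo: /usr/bin/java resolves through /etc/alternatives to OpenJDK 6, and /usr/bin comes before $JAVA_HOME/bin in the PATH as exported above. A sketch of the standard Ubuntu fix (update-alternatives is the stock tool; the priority value is arbitrary):

        sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_01/bin/java 100
        sudo update-alternatives --config java   # select the jdk1.7.0_01 entry
        java -version                            # should now report 1.7.0_01

    Alternatively, exporting PATH=$JAVA_HOME/bin:$PATH (JAVA_HOME first) achieves the same for the current user without touching the alternatives system.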

  • Software way to cool down an old MacBook Pro

    - by notMacBookProSuperUser
    Hi all, first a little background: I've got lots of computers, including Linux PCs, two MacBook Pros and a Mac mini. My concern is with my 'old' MacBook Pro (Core Duo). It really does overheat, and the warranty is long void. Years ago (I'd say 2.5 years or so) it overheated so badly that the battery inflated from the heat. I got a new battery for free, but the machine still gets incredibly hot, much hotter than any other computer I've got: my newer Core 2 Duo MacBook Pro doesn't get nearly as hot as the old one. It is a real pain because I use the old MBP in front of the TV, on my lap, and the heat can become unbearable. I don't want to open up that old MBP. On Linux I can force a CPU 'governor' that decides how the CPU is allowed to operate: 'on demand', 'always max speed', 'always speed x', etc. Does the same exist under Mac OS X? Is there a way, say if a 1.86 GHz Core Duo can run at 1.6 GHz, to tell Mac OS X: "never run this CPU above 1.6 GHz"?

  • How do I remove encryption from a VMware Workstation 7 image?

    - by Chad
    I successfully encrypted a VM image and confirmed it still runs. I then closed the VM, reopened it, and confirmed the encryption password was valid and worked. However, now I want to un-encrypt the VM. When I choose that option, it asks for "your password". I assume this means the password I created when I encrypted it, but it doesn't work. I can still open the VM with that password and run it, but VMware refuses to remove the encryption using it. Am I missing something? Is there a password that I don't know about? Some details:

        1. I created this image (using the standalone converter; physical machine source)
        2. I converted it to ACE
        3. Converted it back to a normal VM (un-ACE'd it)
        4. Encrypted it
        5. Cannot remove the encryption, but can open and run the VM

    As you can see, I am exploring the VMware features. Thanks for any guidance you can give.

  • How to configure a trusted connection between IIS 7 and SQL Server 2005?

    - by user1180652
    How do I configure a trusted connection between IIS 7 and SQL Server 2005? My webapp was working fine with Windows Authentication enabled in IIS. Now, in order to solve a problem, we need to use a trusted connection. Unfortunately, enabling the trusted connection in the web.config broke the webapp. Oddly enough, the application works fine with a trusted connection when I run it from my local dev machine (using the Cassini web server). IIS (Windows Server 2008) is running on one machine; the database (SQL Server 2005, but it could migrate to 2008) is running on another. We are on a Windows domain running AD. All traffic is within our own firewall; there is no public access. Beyond that, I can't provide much info off-hand, but I can find it. We're very "compartmentalized" (we have server people, security people, Oracle people, SQL Server people, etc.). Thanks!

    Update 02/14/2012 0902: The webapp is now functional (no longer broken), but the main issue is still unresolved. I now have the app's application pool running as a domain account with permissions on the SQL Server box and the IIS box. We were using this account to run the application, but, and here's the problem, we need to log the name of the real user who made a change. When using the service account, the name of that service account appeared in the audit tables, making the auditing quite useless. So now I'm at least running again. The connection string in the web.config uses "Trusted_Connection=True" and the app pool uses a domain account with access to both boxes, BUT when I make a change (logged in as me) the name of the service account (the app pool identity) is still logged in the audit tables. I also manually granted full permissions to the service account on the webapp folder. What do I need to do in order to log my name, not the service account's, in the audit tables? Everything I'm reading says I need to establish a trusted connection between the two servers.
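
    What the update describes is the classic double-hop problem: the connection reaches SQL Server as the app pool's identity, not the caller's. The usual ASP.NET ingredients are sketched below with a heavy caveat: impersonation only carries the user's token across to SQL Server when Kerberos constrained delegation has also been configured in AD for the IIS machine/service account, which is a directory-admin task.

        <!-- web.config: run data access under the authenticated caller -->
        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>

    With impersonation plus delegation in place, the audit tables see the real user; without delegation, the impersonated token cannot make the second hop from IIS to SQL Server.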

  • Xmonad on a Windows laptop

    - by Kevin L.
    I'm a Linux developer in the market for a laptop. 90% of my time is spent in Emacs, the terminal, and Google Chrome, and I want to use them within the excellent Xmonad tiling window manager. Given these constraints, I can see only two options: run Linux on the laptop, or run Windows on the laptop and spend all of my time working within a Linux VM. Years of experience suggest that the first option will take many frustrating hours and probably be suboptimal with respect to battery life, wifi, and Fn keys like screen brightness or audio adjustment. For the second option, what would be the ideal setup? I've had a lot of luck with Cooperative Linux on my Samsung NC-10 netbook (Windows XP), but I would have to set up the X11 server myself. What about using VirtualBox (which includes the guest VM's GUI)? Has anyone tried this? Hardware-wise, I'm looking for something in the "MacBook Air killer" category: Samsung Series 9, Lenovo IdeaPad U300s, etc. (i.e., matte screen, 5h+ battery life, roughly 3 pound weight). Price is not a consideration; any suggestions?

  • Get Internal IP Address From DHCP Hostname

    - by ell
    I would like to get the internal IP address of one of the computers on my network. The reason is that I have a little home server box downstairs, but every time I want to SSH into it I have to open my router configuration, go to the DHCP client table and look up its IP address. I would like to be able to type ssh ell-server instead of ssh 192.168.1.105 or whatever it happens to be. My network configuration is:

        - A router downstairs that is connected to the Internet and runs a DHCP server
        - My server (ell-server), a headless PC connected to the router by ethernet cable, running Ubuntu 11.04 Server Edition
        - My laptop upstairs (ell-laptop), running Ubuntu 11.10 Desktop Edition, connected wirelessly
        - Other (irrelevant) computers: 2 x Windows XP, 1 x Xubuntu, all connected with cables

    (It seemed to me the method of connection isn't useful information, but I put it in anyway, just in case. If I have missed any information, please tell me.) Do I have to run a DNS server on one of my computers? If so, which one? And does that mean I will have to run a DDNS client on each computer? Thanks in advance, ell.
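
    One low-effort approach that fits this exact layout (assuming stock Ubuntu packages; Desktop editions already ship Avahi, and it is a one-line install on Server): let mDNS resolve .local names on the LAN, with no DNS server and no per-machine DDNS clients.

        # on ell-server (Ubuntu 11.04 Server)
        sudo apt-get install avahi-daemon

        # then, from ell-laptop
        ssh ell-server.local

    Alternatively, giving the server a static DHCP lease in the router and adding one line to the laptop's /etc/hosts achieves the same with no extra daemons at all.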

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each one. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion:

        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac \
               -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    And this sometimes works for the extraction:

        ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4

    However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video: the audio starts right away, but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with HandBrake before uploading to the web server:

        1a) uploaded clip
        1b) converted with first command
        1c) extracted with second command, missing first six seconds

    By comparison, this second clip comes from Apple's website and does not show the problem:

        2a) uploaded clip
        2b) converted with first command
        2c) extracted with second command, no problem

    Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
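
    A plausible explanation, given how stream copy works: -vcodec copy can only cut on keyframes, while audio packets are copied from the requested time regardless, so a clip whose first keyframe sits several seconds past the -ss point shows exactly this audio-with-blank-video gap. A hedged tweak to the conversion command (the -g value is an assumption: one keyframe per second at -r 15):

        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac \
               -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 -g 15 formatted.mp4

    With a keyframe at least every 15 frames, any -ss cut in the copy-mode extraction lands within a second of a keyframe, which should shrink the blank lead-in to near zero.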

  • Filtering / directing URLs coming onto a network

    - by Jon
    Hi all, I am not sure if this is possible or not, but here is what I would like to do. I have one IP address (dynamic, using zoneedit.com to keep it up to date). I have one webserver running my main site, an Ubuntu machine running Apache. I also have a Windows 2008 server running another site. Just to confuse things, I also run part of my Apache site on the Windows server, currently using ProxyPassReverse to get the information from it. So it looks something like this: IP 1.2.3.4 maps to mydomain.com as well as myotherdomain.com, all requests that come in on port 80 are forwarded to the Apache box, and I use VirtualHost settings to proxy the Windows sites where needed. So:

        - mydomain.com is an Apache site
        - mydomain.com/mywindowssection is served by Apache, using ProxyPassReverse to pull that part of the site from the Windows server
        - myotherdomain.com is served by Apache, using ProxyPassReverse to pull the whole site from the Windows server

    What I would like to be able to do is forward all HTTP requests that come into my network to one machine that figures out who should be serving that content, so that mydomain.com goes to the Apache machine and myotherdomain.com goes to the Windows machine. I am just in the process of setting up an Astaro gateway (never done this before, so it is taking a while to configure) as my firewall, DNS, DHCP, etc.; I don't know if it can handle this. I also have the capacity to run a VM on the network if a separate box is needed for this. Thanks for any and all feedback. Jon
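
    Since the router can only forward port 80 to a single internal address, some machine has to dispatch on the Host header, and the Apache box already plays that role. A sketch of that dispatch expressed as plain name-based vhosts (the internal address and paths are placeholders):

        <VirtualHost *:80>
            ServerName mydomain.com
            DocumentRoot /var/www/mydomain
        </VirtualHost>

        <VirtualHost *:80>
            ServerName myotherdomain.com
            ProxyPreserveHost On
            ProxyPass / http://192.168.0.10/
            ProxyPassReverse / http://192.168.0.10/
        </VirtualHost>

    A gateway appliance that supports reverse proxying could take this job over, but functionally it would be doing the same Host-based routing that Apache handles now.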

  • What was SPX from the IPX/SPX stack ever used for?

    - by Kumba
    I've been trying to learn about older networking protocols a bit, and figured that I would start with IPX/SPX. So I built two MS-DOS virtual machines in VirtualBox and got IPX communications working (after much trial and error). The idea was to get several old DOS games running, link them up in a multiplayer match, interact with each game window, and capture the traffic using Wireshark on the host machine. From this, I got Quake, Master of Orion 2, and MechWarrior 2 to communicate back and forth. Doom, Doom 2, Duke3D, Warcraft, and several others either buggered up under the VM or just couldn't see the other VM on the IPX network. What did I discover? None of the working games used SPX. Not even Microsoft's NET DIAG used SPX. They all ran ONLY on top of IPX. I can't even find SPX examples or use-cases of SPX traffic running over IEEE 802.3 or Ethernet II framing. I did find references that it was in abundant use on Token Ring, but that's it. Yet any IPX-aware application that I've hunted down so far usually advertises itself as "IPX/SPX", which seems to be a bit of a misnomer, since it doesn't seem to use SPX. So what was SPX used for? Are there any DOS applications out there that use it and that will run under my VM setup? Edit: I am aware that IPX is to SPX as IP is to TCP (layer 3 to layer 4), so I expected to see an SPX layer underneath the IPX layer in Wireshark when I ran my tests.

  • Is there a program to show programs loading during the boot process in real time?

    - by Gary M. Mugford
    Hi all, there are any number of programs that will show me WHAT will run during the boot process for Windows XP. I've always been partial to Mike Lin's version, but there are several others, some of them quite possibly superior. That's not the issue. What I'd really like is a program that loads first and then lists the programs that are about to load, checking them off as they load. This isn't something I necessarily need for myself. But certain family members get click-happy as soon as they see the icon they eventually want to run, and end up clicking on it. THIRTY-TWO TIMES, in one memorable crash-inducing spasm. If there were some way to show progress while Windows works through its various auto-load locations, PLUS a BIG BANNER saying "Please do not move the mouse or click on anything until done.", I think I might cut down on my early-morning family support calls significantly. I've tried a variety of searches, but I couldn't find anything that shows this in real time amid the forest of links to programs that show the list after the fact. Any leads? If not, do any of you who write after-the-fact listers want to take a shot at producing what I think would be a relatively popular utility? Best of the season to all of you and yours. Thanks in advance for any replies, GM

  • Can't upgrade NVIDIA GeForce 310M display driver on Acer Aspire 5745PG

    - by Emerson
    I've been trying for days to update my video driver. I have an Acer Aspire 5745PG with an NVIDIA GeForce 310M board, and I was trying to run the Sony Vegas video editor with Boris Continuum plugins. Some of the plugins, like BCC Text Extrude, wouldn't work, showing the message "Insufficient depth resolution to run Blue". I then read somewhere that updating the display driver would do the trick. That was when my nightmares started; I've already lost a good 3 nights trying to sort this out, without success :( The display driver I had before (and that I currently have after restoring) was version 8.16.11.8997. The first thing I tried was downloading the 8.17.12.6619 driver directly from Acer, shown as the latest version on the Acer website: http://support.acer.com/product/default.aspx?modelId=2466 Running it would say "Driver Package Failure - Setup failed to read the required Display Driver to be used with this package". I then tried NVIDIA's own driver, the latest at the time being version 296.10: http://us.download.nvidia.com/Windows/296.10/296.10-notebook-win7-winvista-64bit-international-whql.exe That gave me a similar error message :/ After some research I found that others had the same issue and had to change the installer's configuration file so it would recognize this NVIDIA board: http://forums.nvidia.com/index.php?showtopic=222904 That topic said to look for the "Device Instance Id" property of the NVIDIA GeForce 310M display adapter; I couldn't find that, but I did find the "Hardware Id", which seemed to be the right one. I followed the instructions and changed the inf file, first for the Acer installer and then for NVIDIA's own driver. Both then went ahead with the installation, but the only thing I got was a black screen, while the computer otherwise appeared to be running fine. I had to hard-reset, and it came back with the generic VGA driver. I could only get my display back using the recovery function. I imagine thousands of these notebooks were sold; can its driver really not be updated? Could someone help me with this? Thanks, Echo

  • Server Manager from Windows 2008 to Hyper-V 2008 R2?

    - by Roger Lipscombe
    My workstation runs Windows Server 2008, and I do not have local admin privileges on it. I have a Hyper-V Server 2008 R2 (i.e., Core + Hyper-V) box, and on that box I do have local admin privileges. I can Remote Desktop to the box, and Hyper-V Manager works fine (outside of Server Manager). It's just that there are some things that are easier to do in Server Manager (partition disks, etc.) than at the command line, so I'd like to use Server Manager on my workstation to manage the Hyper-V box. However: when I run Server Manager on my workstation, it prompts for elevation and won't then let me connect to another server. If I attempt to run MMC and then add "Server Manager" as a snap-in, it doesn't prompt me for a server name; it just complains that I'm not an administrator and provides no way to connect to another server. The Remote Server Administration Tools (RSAT) are for Windows Vista and Windows 7 RC; they don't install on Windows 2008.
