Search Results

Search found 30524 results on 1221 pages for 'display errors'.


  • Vagrant-aws not provisioning

    - by SuperCabbage
    I'm trying to spin up and provision an EC2 instance with Vagrant. It successfully creates the instance, and I can then use vagrant ssh to SSH into it, but Puppet doesn't seem to carry out any provisioning. Upon running vagrant up --provider=aws --provision I get the following output:

      Bringing machine 'default' up with 'aws' provider...
      WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
      [default] Warning! The AWS provider doesn't support any of the Vagrant high-level network
      configurations (`config.vm.network`). They will be silently ignored.
      [default] Launching an instance with the following settings...
      [default]  -- Type: m1.small
      [default]  -- AMI: ami-a73264ce
      [default]  -- Region: us-east-1
      [default]  -- Keypair: banderton
      [default]  -- Block Device Mapping: []
      [default]  -- Terminate On Shutdown: false
      [default] Waiting for SSH to become available...
      [default] Machine is booted and ready for use!
      [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/ => /vagrant
      [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/manifests/ => /tmp/vagrant-puppet/manifests
      [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/modules/ => /tmp/vagrant-puppet/modules-0
      [default] Running provisioner: puppet...
      An error occurred while executing multiple actions in parallel.
      Any errors that occurred are shown below.

      An error occurred while executing the action on the 'default' machine.
      Please handle this error then try again:

      No error message

    I can then SSH into the instance with vagrant ssh, but none of my provisioning has taken place, so I'm assuming that errors occurred but I'm not being given any useful information about them. My Vagrantfile is as follows:

      Vagrant.configure("2") do |config|
        config.vm.box = "ubuntu_aws"
        config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

        config.vm.provider :aws do |aws, override|
          aws.access_key_id = "REDACTED"
          aws.secret_access_key = "REDACTED"
          aws.keypair_name = "banderton"
          override.ssh.private_key_path = "~/.ssh/banderton.pem"
          override.ssh.username = "ubuntu"
          aws.ami = "ami-a73264ce"
        end

        config.vm.provision :puppet do |puppet|
          puppet.manifests_path = "manifests"
          puppet.module_path = "modules"
          puppet.options = ['--verbose']
        end
      end

    My Puppet manifest is as follows:

      package { [
        'build-essential',
        'vim',
        'curl',
        'git-core',
        'nano',
        'freetds-bin'
      ]:
        ensure => 'installed',
      }

    None of the packages are installed.
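
    Since Vagrant swallows the Puppet failure ("No error message"), a sensible first step is to make it talk. A diagnostic sketch: VAGRANT_LOG is Vagrant's own debug switch and the /tmp/vagrant-puppet paths come from the rsync output above, but the default.pp manifest name is Vagrant's default and an assumption here.

      # re-run only the provisioner, with debug logging, and keep the output
      VAGRANT_LOG=debug vagrant provision 2>&1 | tee vagrant-debug.log
      grep -i -B2 -A8 puppet vagrant-debug.log

      # or run Puppet by hand inside the instance, since the manifests are already rsynced
      vagrant ssh -c 'sudo puppet apply --verbose \
          --modulepath=/tmp/vagrant-puppet/modules-0 \
          /tmp/vagrant-puppet/manifests/default.pp'

    Whatever Puppet prints when run by hand is the error Vagrant was hiding.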


  • CentOS 5.5 [Read-only file system] issue after rebooting

    - by canu johann
    I have a virtual server running CentOS 5.5, hosted by a Japanese company called Sakura. Since yesterday, SSH connections couldn't be established. I contacted the support center, who told me to restart the VS from the control panel. After restarting, I got the messages below:

      Connected to domain wwwxxxxxx.sakura.ne.jp
      Escape character is ^]
      Setting hostname localhost.localdomain:  [ OK ]
      Setting up Logical Volume Management: No volume groups found  [ OK ]
      Checking filesystems
      Checking all file systems.
      [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
      / contains a file system with errors, check forced.
      /: Inodes that were part of a corrupted orphan linked list found.
      /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
              (i.e., without -a or -p options)
      cat: /proc/self/attr/current: Invalid argument
      Welcome to CentOS
      Starting udev:  [ OK ]
      Setting hostname localhost.localdomain:  [ OK ]
      Setting up Logical Volume Management: No volume groups found  [ OK ]
      Checking filesystems
      Checking all file systems.
      [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
      / contains a file system with errors, check forced.
      /: Inodes that were part of a corrupted orphan linked list found.
      /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
              (i.e., without -a or -p options)
      [FAILED]

      *** An error occurred during the file system check.
      *** Dropping you to a shell; the system will reboot
      *** when you leave the shell.
      *** Warning -- SELinux is active
      *** Disabling security enforcement for system recovery.
      *** Run 'setenforce 1' to reenable.
      /etc/rc.d/rc.sysinit: line 53: /selinux/enforce: Read-only file system

      Give root password for maintenance
      (or type Control-D to continue):
      bash: cannot set terminal process group (-1): Inappropriate ioctl for device
      bash: no job control in this shell
      bash: cannot create temp file for here-document: Read-only file system
      (the line above repeats several times)

      (Repair filesystem) 1 # setenforce 1
      setenforce: SELinux is disabled
      (Repair filesystem) 2 # echo 1
      (Repair filesystem) 4 # /etc/init.d/sshd status
      openssh-daemon is stopped
      (Repair filesystem) 5 # /etc/init.d/sshd start
      Starting sshd: NET: Registered protocol family 10
      lo: Disabled Privacy Extensions
      touch: cannot touch `/var/lock/subsys/sshd': Read-only file system
      (Repair filesystem) 6 # sudo /etc/init.d/sshd start
      sudo: sorry, you must have a tty to run sudo
      (Repair filesystem) 7 #

    I have 4 sites in production and I need to get the server (SSH, HTTPD, ...) back up quickly. Thank you for your time.
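
    The boot log already names both the failing device and the fix it is asking for: run fsck manually, without -a/-p, from the maintenance shell. A minimal recovery sketch, assuming the root filesystem really is /dev/vda3 as shown above (fsck's repairs can discard damaged data, so take a control-panel snapshot or backup first if one is available):

      # at the (Repair filesystem) prompt
      fsck.ext4 -f -y /dev/vda3
      reboot

    Starting sshd from the repair shell fails only because the root filesystem is still mounted read-only; once fsck completes and the machine boots cleanly, the services come back on their own.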


  • Colors are not displayed correctly in GVim on some computers

    - by MARTIN Damien
    I'm trying to use a colorscheme. On my desktop it looks how it should: https://github.com/martin-damien/tetrisity-vim/blob/master/tetrisity-vim.png But on my laptop I get the following colors: http://img703.imageshack.us/img703/8444/errorufl.png As you can see, the simplest and most visible difference is in the comments. They should be grey on black, but they end up blue on transparent. What could cause such errors?
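
    A quick way to narrow this down (a diagnostic sketch; the usual suspects are a different Vim build on the laptop, or a colorscheme that defines only gui* colors and not cterm* ones):

      # compare the two machines' Vim builds first
      gvim --version | head -n 2

      # then, inside GVim on the laptop:
      #   :echo has('gui_running')      " 1 means the GUI (gui*) color definitions apply
      #   :verbose highlight Comment    " shows the colors in effect and which script set them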


  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old, running pretty much 24/7 - an HP server of some description. We have an Ultrium 448 cycling LTO2 400GB tapes daily, incrementally backing up approximately 100GB worth of data (20GB C:\ and system state, 40GB Exchange, 40GB database for some crap marketing software) with Backup Exec 10D.

    As of 5 months ago, the backups have been consistently failing with I/O errors, bad reads and some write errors. When I say consistently, I mean every time: we haven't had a proper backup for the entire 5 months. So if the server explodes tomorrow, 7 years' worth of data will just cease to exist.

    I've only recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to a USB 2.0 external drive. It was excruciatingly slow - so slow it took 40 hours and still wasn't finished. I ended up cancelling it and reconfiguring the selections to reduce the file size. This, however, isn't a permanent solution.

    I concluded that the I/O error comes either from a faulty tape drive (which has a tape stuck in it right now that won't come out) or from a dying SCSI controller. Neither is good news and both are extremely expensive to fix. I'm operating on an extremely low budget, so I have been looking at outsourcing the backups. A company in Sydney (where I'm located) offers incremental online backups via a NAS. It costs almost double what a new tape drive would, but offers monthly repayments, which will let us get through times when cash flow is minimal. It seems like a sweet deal, but it is still a little pricey.

    So I'm looking for a cheaper yet reliable solution - maybe an in-house NAS, or something offsite? The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future, and I must be able to restore it to another server with different hardware.


  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following will run three commands and write all output (STDOUT and STDERR) into a single logfile:

      { command1 && command2 && command3 ; } > logfile.log 2>&1

    Here is what I want to do with the output of these commands:

    1. STDERR and STDOUT for all commands go to a logfile, in case I need them later - I usually won't look in here unless there are problems.
    2. STDERR is printed to the screen (or optionally piped to /bin/mail), so that any error stands out and doesn't get ignored.
    3. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:

      { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected]

    The problem I run into is that STDERR loses context during I/O redirection. A 2>&1 converts STDERR into STDOUT, and therefore I cannot view errors on their own if I just do 2> error.log.

    Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the --keep-going flag:

      { ./configure && make --keep-going && make install ; } > build.log 2>&1

    Or here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error:

      { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1

    I think what I want involves some sort of Bash I/O redirection, but I can't figure it out.
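
    For what it's worth, one common bash-only pattern (a sketch, not portable POSIX sh; command1..command3 and logfile.log are the question's placeholders): send STDOUT straight to the log, and route STDERR through a process substitution that appends it to the same log and echoes it back to the real STDERR.

      { command1 && command2 && command3 ; } > logfile.log 2> >(tee -a logfile.log >&2) \
          || mailx -s "There was an error" [email protected] < logfile.log

    The exit status of the { ...; } group still drives the || branch, so the error handling keeps working. One caveat: STDOUT and STDERR lines can interleave slightly out of order in the log, since they travel through separate buffers.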


  • Fix/Bypass "Cannot connect to the real website-blocked" error in Google Chrome with OpenDNS blocking

    - by George H
    I have a large problem with Chrome in my organisation. I use DNS to manage website blocking for sites that are not appropriate or are potentially a risk to the organisation. I only want to use Chrome on the network, as Internet Explorer has compatibility problems with some sites that we use (we cannot change this or use different sites), so using Internet Explorer is not a solution. I do not want to install a different browser either, for multiple reasons - mainly the difficulty of rewriting the customised add-ons that we use.

    Recently, however, I have had lots of problems with Chrome SSL errors. I cannot use my custom OpenDNS block pages, which include a contact form to request unblocking. Chrome often blocks the OpenDNS block page for sites that request HTTPS (Facebook is a good example; another is https://internetbadguys.com, an OpenDNS example domain): Chrome refuses to load the block page that explains the site is blocked. Users then call IT support instead, but they want a real solution, as they are sick of getting SSL errors.

    I have tried looking into ways of turning this off:
    - Typing "proceed". That didn't work.
    - Typing "proceed" and pressing Enter. Didn't work.
    - Finding the phishing and anti-malware setting described in online guides; I cannot find it in Chrome any more.
    - Not using HTTPS. However, most sites automatically redirect to HTTPS, so the error keeps coming up.
    - Checking my clocks. They were correct.

    Does anyone have an idea how to disable, bypass or work around this "feature"?

    EDIT: This is an example of what I am talking about - I found it on Google Images. I do not block Google.
    EDIT 2: My clocks are correct. I cannot stop using OpenDNS either.
    EDIT 3: My question is: how do I stop Chrome from refusing to load pages that are blocked by OpenDNS when the server has explicitly requested HTTPS?


  • What is wrong with my .htaccess file? I'm trying to permanently redirect my whole site to the index.

    - by SocialAddict
    This is giving me a 500 Internal Server Error. Any suggestions? I have tried various examples, but I think I'm missing something...

      RewriteEngine On
      RewriteCond %{request_uri}!^ /index\.htm
      RewriteRule ^(.*) /index\.htm [R=permanent,L]

    It displays the homepage if I navigate there, but anything that meets the conditions (everything apart from index.htm) gives the 500 error.

    EDIT: with the above code it now doesn't give any 500 errors, but it doesn't redirect for any pages.
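
    A sketch of a corrected version, assuming the goal is to 301 every request except /index.htm itself. Two likely culprits in the original: mod_rewrite variable names are case-sensitive, so it must be %{REQUEST_URI}; and the missing space before the pattern makes the condition compare the string "%{request_uri}!^" against the pattern "/index\.htm", which never matches - which would explain why, after the 500 went away, nothing redirected.

      RewriteEngine On
      RewriteCond %{REQUEST_URI} !^/index\.htm$
      RewriteRule ^ /index.htm [R=301,L]

    The negated condition is also what stops the rule from redirecting /index.htm to itself in an endless loop.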


  • Windows not remembering default audio device?

    - by Lynda
    I prefer the audio output on my computer to use the standard audio jack rather than HDMI, due to volume issues, but I am using a monitor connected over HDMI. I have set the default audio device to "Speakers", but every time I reboot, the default audio device is the HDMI output again. I am running Windows 7 64-bit. Why does it not remember the default device? (I do shut down and boot up properly, without errors.)


  • UDP test results not matching expected behaviour

    - by ernst
    I have a local network topology structured as follows: three hosts and a switch in the middle. The switch supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-2-3/25. This is the output of ifconfig on H3:

      eth0  Link encap:Ethernet  HWaddr *****
            inet addr:172.16.0.3  Bcast:172.16.0.127  Mask:255.255.255.128
            UP BROADCAST MULTICAST  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
            Interrupt:16

    The output on H1 and H2 matches perfectly. The hosts are mutually reachable; I have tested the network with ping. I have forced the Ethernet interface to work at 10M with:

      ethtool -s eth0 speed 10 duplex full autoneg on

    This is the output of ethtool eth0:

      Supported ports: [ TP ]
      Supported link modes:   10baseT/Half 10baseT/Full
                              100baseT/Half 100baseT/Full
                              1000baseT/Half 1000baseT/Full
      Supported pause frame use: No
      Supports auto-negotiation: Yes
      Advertised link modes:  10baseT/Full
      Advertised pause frame use: Symmetric
      Advertised auto-negotiation: Yes
      Speed: 10Mb/s
      Duplex: Full
      Port: Twisted Pair
      PHYAD: 1
      Transceiver: internal
      Auto-negotiation: on
      MDI-X: Unknown
      Supports Wake-on: g
      Wake-on: d
      Current message level: 0x000000ff (255)
                             drv probe link timer ifdown ifup rx_err tx_err
      Link detected: yes

    I am running an experimental test using nttcp to measure the goodput when H1 and H2 send data to H3 at the same time. Since all three links are forced to the same capacity, and the traffic arriving at H3 is 10M from H1 + 10M from H2 = 20M into a 10M link, I would expect a bottleneck effect and, given the unreliable nature of UDP, packet loss. But this doesn't happen: the nttcp output shows the same number of bytes sent and received. This is the output of nttcp on H3:

      nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1
      [1] 4071
           Bytes   Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
      l  8388608    13.74   0.05       4.8848   1398.0140   2049    149.14  42684.8
      l  8388608    14.02   0.05       4.7872   1398.0140   2049    146.17  42684.8
      1  8388608    13.56   0.06       4.9500   1118.4065   2051    151.28  34181.1
      1  8388608    13.89   0.06       4.8310   1198.3084   2051    147.65  36623.0

    How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards
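
    One observation from the numbers above: each transfer actually ran at only about 4.8-4.95 Mbit/s (the Real-MBit/s column), so the combined load at H3 was roughly 9.7 Mbit/s - just under the 10 Mbit/s link - and no loss would be expected; the senders never pushed 10M each. A cross-check sketch that drives each sender at a fixed UDP rate (assumes iperf version 2 is installed on all three hosts):

      # on H3 (receiver):
      iperf -s -u -i 1

      # on H1 and H2, started at the same time:
      iperf -c 172.16.0.3 -u -b 10M -t 30

    iperf's server-side report includes datagram loss and jitter, so a genuine 20M-into-10M overload will show up directly.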


  • Website and Web Services Monitoring

    - by JohnyD
    I'm looking for a good product (either stand-alone or remote) that will monitor our online services. We're in a transition period between new ASP.NET and old FoxPro/CGI, and because of this we experience errors that can make our services unresponsive. Can anyone suggest internal/external tools, subscription-based solutions, or anything else that I can put in place to monitor all aspects of our web services? Thank you
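
    Until a full product is chosen, even a minimal external probe will catch outages (a sketch only; the URL and alert address are placeholders, and it assumes a Unix machine outside your network with curl and working mail):

      # /etc/crontab entry: fetch a page every 5 minutes, alert on any failure or timeout
      */5 * * * * root curl -fsS --max-time 30 http://www.example.com/ >/dev/null || echo "web check failed" | mail -s "Web service alert" [email protected]

    One such line per service (the ASP.NET side and the old CGI side) covers both stacks during the transition.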


  • nginx probably delivering wrong content type for .css files with PHP tags

    - by Katai
    And again - nginx is giving me many questions today :) As always, I've already tried around for a while, but can't seem to fix this issue: I just configured nginx to handle my .css files the same as my .php files (to parse PHP tags inside the CSS file). This works perfectly, and the file is found and delivered. I debugged it with Firebug, and everything is OK (it displays the contents of the .css inside the opened <link> tag). So, everything working, right? Wrong. It has the CSS, but it does not interpret it! What I mean by this: apparently, the content type of the CSS is wrong. The page can access the CSS but doesn't bother to actually use it.

    What I checked/tried:
    - There are no PHP errors inside the .css, so that one is out.
    - The .css is accessible. I can call the URI manually, or check if the included URL finds it: both work.
    - The .css has no syntax errors (I switched to a CSS that just has body { background-color: #000; }).
    - It works without nginx.
    - I deleted the browser cache / restarted nginx after config rewrites.

    Here is the configuration:

      server {
          listen 80;
          server_name localhost;
          access_log /var/log/nginx/board.access_log;
          error_log /var/log/nginx/board.error_log warn;
          root /var/www/board/public;
          index index.php;
          fastcgi_index index.php;

          location / {
              try_files $uri $uri /index.php;
          }

          location ~ (\.php|\.css)$ {
              try_files $uri =404;
              include /etc/nginx/fastcgi_params;
              #keepalive_timeout 0;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_pass 127.0.0.1:7777;
          }
      }

    Firebug 'Network' response headers:

      Connection          keep-alive
      Content-Encoding    gzip
      Content-Type        text/html
      Date                Sat, 16 Jun 2012 10:08:40 GMT
      Server              nginx/1.0.5
      Transfer-Encoding   chunked
      X-Powered-By        PHP/5.3.6-13ubuntu3.7

    I think I just answered my own question. Is the Content-Type text/html the problem? How can I remove that? My personal guess is that I have to use this in some way:

      include /etc/nginx/mime.types;
      default_type application/octet-stream;

    But I'm not sure... anyone have an idea how to solve this?

    TL;DR: The CSS file is delivered correctly, but the browser doesn't seem to 'use' it as CSS. (Tested; it works on Apache.)
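
    The Content-Type is indeed the problem: browsers generally refuse to apply stylesheets served as text/html, and PHP-generated responses default to text/html regardless of the file extension (nginx's mime.types only applies to files it serves itself, not to FastCGI responses). A sketch of the usual fix, done in the stylesheet itself:

      <?php header('Content-Type: text/css'); ?>
      /* ...the rest of the PHP-enhanced stylesheet... */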


  • Configuring three monitors with two Radeon X1600/X1650 graphics cards under Ubuntu

    - by cpm
    I have three SyncMaster 932a monitors I want to use with two Radeon X1600/X1650 cards under Linux. I am running X.org X Server 1.6.0, as provided by Ubuntu's Wubi installer. After turning off mirroring, I ended up with this xorg.conf:

      Section "Monitor"
          Identifier "Configured Monitor"
      EndSection

      Section "Screen"
          Identifier "Default Screen"
          Monitor    "Configured Monitor"
          Device     "Configured Video Device"
          SubSection "Display"
              Virtual 2560 1024
          EndSubSection
      EndSection

      Section "Device"
          Identifier "Configured Video Device"
      EndSection

    The left monitor had a menu bar and a task bar, the center monitor was just desktop, and windows would maximize to the current monitor. The third monitor and the second graphics card weren't being used at all. Then I changed my configuration to specify each card manually by its PCI bus:

      Section "ServerLayout"
          Identifier "TheLayout"
          Screen 0 "Radeon Screen 1"
          Screen 1 "Radeon Screen 2" RightOf "Radeon Screen 1"
      EndSection

      Section "Screen"
          Identifier "Radeon Screen 1"
          Monitor    "Configured Monitor"
          Device     "Radeon the First"
          SubSection "Display"
              Virtual 2560 1024
          EndSubSection
      EndSection

      Section "Screen"
          Identifier "Radeon Screen 2"
          Monitor    "Configured Monitor"
          Device     "Radeon the Second"
      EndSection

      Section "Device"
          Identifier "Radeon the First"
          Driver     "radeon"
          BusID      "PCI:1:0:0"
      EndSection

      Section "Device"
          Identifier "Radeon the Second"
          Driver     "radeon"
          BusID      "PCI:2:0:0"
      EndSection

      Section "Monitor"
          Identifier "Configured Monitor"
      EndSection

    Now both the left and right monitors have task bars and menu bars, windows cannot be dragged from the first two monitors to the third, and maximizing on the left or center monitor fills both of them. I also tried adding Option "Xinerama" "true" to the ServerLayout section, but then X11 wasn't able to start at all.

    I want to:
    - allow moving windows across all three monitors;
    - have maximizing fill only the current monitor;
    - have menu/task bars on either only the left monitor or all three.

    How can I make this possible?
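
    Xinerama is the mechanism that gives exactly that window behaviour across separate Screen sections, so the crash when enabling it is the thing to chase. A hedged sketch of the usual first fix: Xinerama requires every screen to run at the same colour depth, so pin the depth explicitly and read /var/log/Xorg.0.log if the server still dies (the depth value here is an assumption to verify against your log):

      Section "ServerLayout"
          Identifier "TheLayout"
          Screen 0 "Radeon Screen 1"
          Screen 1 "Radeon Screen 2" RightOf "Radeon Screen 1"
          Option "Xinerama" "true"
      EndSection

      # and in each Screen section:
      DefaultDepth 24

    Be aware that on drivers of this generation Xinerama typically disables DRI, so 3D acceleration may be lost in this layout regardless.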


  • Problems connecting CLARiiON LUNs to Solaris 10

    - by vialde
    I've got a CLARiiON SAN and a Solaris 10 server with an Emulex HBA. The thin LUNs are visible in Solaris and I can happily format the raw devices. Unfortunately, that's all I can do: all other operations result in I/O errors. I'm running current versions of PowerPath, and Solaris is patched as high as it'll go. Does anyone have any similar experiences?
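
    A first diagnostic sketch (assumes PowerPath's powermt tool is on the path; the point is to see whether every path to each LUN is actually alive, which separates an array-side masking/zoning problem from a host-side one):

      powermt display dev=all   # per-LUN path states; look for dead or degraded paths
      powermt check             # prompts to remove dead paths, then re-display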


  • Starting up Ion3 in Ubuntu

    - by Masi
    How can you start up Ion3? I installed Ion via apt-get, and I have exec ion in my .xinitrc. I get:

      >> Unable to redirect root window events for screen 0.
      >> Refusing to start due to encountered errors.
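
    That error means something else already owns the root window on that screen - in other words, another window manager is already running, which happens when Ion is launched from inside an existing GNOME/KDE session instead of a bare X session. A minimal .xinitrc sketch, to be used with startx from a console (it assumes the Ubuntu package installs the binary as ion3; confirm with `which ion3`):

      #!/bin/sh
      # nothing else in this file should start a window manager
      exec ion3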


  • Tri-head Linux system with Xmonad: is it possible to have HW acceleration?

    - by progo
    What means exist to have three monitors, all controlled by Xmonad, with hardware 3D acceleration as well? I had the pleasure of using three monitors earlier this year, and while Xmonad and Xinerama handle three monitors easily, I had to throw in an extra display driver and also let go of Nvidia's own TwinView (which is a hack on Xinerama). This left me with no HW acceleration, and some flickering, as double buffering wouldn't work with certain applications. However, three monitors handle so beautifully that I had a hard time coming back to two.

    I understand the easiest way to achieve a HW-accelerated tri-head combo is to split into two Xorgs, but I wouldn't be able to switch windows between them, so I'm not really into that solution. What's more, having a cheap old PCI card alongside an even slightly better PCIe card seemed to slow things down. Even when I occasionally disabled the third monitor in the Xorg configuration, I couldn't get HW acceleration to work; only after I physically disconnected the old PCI card could I get games back in business.

    Would a Matrox DualHead2Go/TripleHead2Go and a powerful Nvidia GPU do the trick? I understand Xmonad can be configured to "believe" that a "single" 3360x1050 display (as the DualHead2Go will merge it) is actually two different ones, so that Xmonad's Mod-w and Mod-e would work properly there?
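
    On that last point: xmonad-contrib ships XMonad.Layout.LayoutScreens for exactly this. A sketch of the relevant piece of xmonad.hs (a hedged example, assuming a recent xmonad with xmonad-contrib; the bindings follow the module's documentation):

      import XMonad
      import XMonad.Layout.LayoutScreens
      import XMonad.Layout.TwoPane
      import XMonad.Util.EZConfig (additionalKeys)

      main :: IO ()
      main = xmonad $ def `additionalKeys`
          -- split the single merged 3360x1050 display into two logical screens,
          -- so Mod-w / Mod-e behave as if there were two physical monitors
          [ ((mod4Mask .|. shiftMask, xK_space), layoutScreens 2 (TwoPane 0.5 0.5))
          -- undo the split and return to what the X server reports
          , ((mod4Mask .|. controlMask .|. shiftMask, xK_space), rescreen)
          ]

    Since the Dual/TripleHead2Go presents one big screen to X, the driver sees an ordinary single-head setup, so this route should not cost hardware acceleration.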


  • NoMachine NX window closes after establishing connection

    - by blackicecube
    I am trying to use the NoMachine NX server and client, but somehow it doesn't work. What happens is the following:

    1. The client starts up.
    2. The client authenticates with the server.
    3. The NoMachine window appears for 2-4 seconds.
    4. The NoMachine window exits.

    Somehow a "closeEvent" is sent. Here's what I see in the log file:

      [Thu Sep 24 11:20:37 2009]: Starting nxcomp with options: 'NX 299 Switch connection to: NX mode: unencrypted options: nx/nx,options=/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/options:1022'.
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor: opened file: [/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/session]
      [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[246] str=[Initializing X protocol compression] error=[0]
      [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Initializing X protocol compression]
      [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[247] str=[Established the display connection] error=[0]
      [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Established the display connection]
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: QClipboard: Unknown SelectionClear event received.
      [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
      [Thu Sep 24 11:20:38 2009]: LoginDialog: Agent found closing windows...
      [Thu Sep 24 11:20:38 2009]: LoginDialog: setting automatic reconnection to true.
      [Thu Sep 24 11:20:38 2009]: Settings::flush
      [Thu Sep 24 11:20:38 2009]: Settings::flush
      [Thu Sep 24 11:20:38 2009]: LoginDialog: closeEvent received!
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
      [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called begin
      [Thu Sep 24 11:20:38 2009]: LoginDialog: stopAllTimers
      [Thu Sep 24 11:20:38 2009]: LoginDialog: stopProgressTimer
      [Thu Sep 24 11:20:38 2009]: Utility::getPreferencesFile: 'nxclient' - '/home/foo/.nx/config/nxclient.cfg'
      [Thu Sep 24 11:20:38 2009]: Settings::flush
      [Thu Sep 24 11:20:38 2009]: Called destructor for protocol class
      [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called end

    Anyone with a helpful idea?


  • Facing problems configuring Reporting Services

    - by idrees99
    Dear All, I am unable to configure Reporting Services with SQL Server 2005 Express Edition. I have posted a link to a screenshot which shows the status: whenever I go to configure Reporting Services, it gives me the errors shown in the screenshot. I am also unable to start the Reporting Services service; it starts and then stops automatically. I am using Windows XP Professional. I need help, thanks.

      http://www.freeimagehosting.net/image.php?8977c7f37a.jpg


  • Apache server randomly stopped working yesterday. How can I fix it?

    - by Clueless
    I use Apache version 2.2.11 through WampServer to host a website. Every time I connected to localhost during the last week, everything went through fine. Now, all of a sudden, I get errors. As far as I know I didn't change the configuration or anything; it just stopped working for no apparent reason. How can I get it back up and working again?
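
    Without the actual error text, Apache's own diagnostics are the first stop. A sketch, run from a command prompt (the paths assume WampServer's default layout under C:\wamp; adjust them to the real install directory):

      C:\wamp\bin\apache\Apache2.2.11\bin\httpd.exe -t
      rem prints "Syntax OK" or the exact configuration error

      type C:\wamp\logs\apache_error.log
      rem the last lines usually name the failure; another program grabbing
      rem port 80 (e.g. Skype or IIS) is a common sudden-death cause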


  • Does BitLocker reduce write reliability?

    - by Unsigned
    For the purposes of this question, BitLocker refers to the BitLocker-to-go variety on a disk with write-caching disabled. NTFS supports metadata journaling, which, although not completely failsafe, does mitigate certain types of potential filesystem errors. Assuming an NTFS volume is protected with BitLocker, does this reduce the failure tolerance? Would a power failure during a write to an NTFS volume, that's protected with BitLocker, be more prone to corruption than on an unencrypted NTFS volume?


  • Starting the Oracle database automatically

    - by Searock
    I am using Fedora 8 and Oracle 10g Express Edition. Every time I start Fedora I have to click on Start Database. How can I add startdb.sh to startup so that it executes automatically when Fedora starts? I have tried adding the path to /etc/rc.d/rc.local, but it still doesn't work:

      ./usr/lib/oracle/xe/app/oracle/product/10.2.0/server/config/scripts/startdb.sh

    I have even tried adding this script as /etc/init.d/oracle:

      #!/bin/bash
      #
      # Run-level Startup script for the Oracle Instance and Listener
      #
      # chkconfig: 345 91 19
      # description: Startup/Shutdown Oracle listener and instance

      ORA_HOME="/u01/app/oracle/product/9.2.0.1.0"
      ORA_OWNR="oracle"

      # if the executables do not exist -- display error
      if [ ! -f $ORA_HOME/bin/dbstart -o ! -d $ORA_HOME ]
      then
          echo "Oracle startup: cannot start"
          exit 1
      fi

      # depending on parameter -- startup, shutdown, restart
      # of the instance and listener or usage display
      case "$1" in
          start)
              # Oracle listener and instance startup
              echo -n "Starting Oracle: "
              su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl start"
              su - $ORA_OWNR -c $ORA_HOME/bin/dbstart
              touch /var/lock/subsys/oracle
              echo "OK"
              ;;
          stop)
              # Oracle listener and instance shutdown
              echo -n "Shutdown Oracle: "
              su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl stop"
              su - $ORA_OWNR -c $ORA_HOME/bin/dbshut
              rm -f /var/lock/subsys/oracle
              echo "OK"
              ;;
          reload|restart)
              $0 stop
              $0 start
              ;;
          *)
              echo "Usage: $0 start|stop|restart|reload"
              exit 1
      esac
      exit 0

    and even this doesn't work. startdb.sh is located at /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/config/scripts/startdb.sh. Thanks.
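
    Two things stand out: an init script in /etc/init.d only runs at boot after it has been registered with chkconfig, and the script above points ORA_HOME at a 9.2 path while this machine runs 10g XE - which ships its own service script. A sketch of the registration, as root (the oracle-xe service name is what the XE RPM normally installs; verify with `ls /etc/init.d`):

      # simplest route: use the service XE ships
      chkconfig oracle-xe on
      service oracle-xe start

      # or, for the hand-written script:
      chmod +x /etc/init.d/oracle
      chkconfig --add oracle
      chkconfig --level 345 oracle on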


  • Renaming IIS websites

    - by IIS Newb
    I want to rename some websites in IIS for organizational purposes. I assume that the name is just metadata and won't cause any errors or problems, but I'm not sure. Does anything rely on the website name staying unchanged? SSL certs maybe? I know each site has an ID in the metabase, and I assume that is all that's needed to identify the site programmatically.
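
    In IIS 6 (the metabase generation), the display name is the site's ServerComment property, keyed by the numeric site ID, so bindings and SSL certificates are not tied to it. A sketch of a scripted rename (assumes the default AdminScripts location and site ID 1; check the real ID first):

      cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ServerComment "New Site Name"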


  • Ubuntu 10.04 Lucid on IBM Thinkpad T41

    - by naugtur
    I tried to boot my T41 from an Ubuntu Lucid live CD, and it worried me. The white text on the violet background had green pixels around it. After the splash screen went away, Ubuntu popped up an alert that the installer had encountered errors and would now run live. The live system worked fine, I guess... Did anybody experience such behaviour?


  • D-Link ARP spoofing prevention

    - by Wiploo
    Can someone help me understand ARP spoofing prevention on the D-Link DGS-3100 (ftp://ftp2.dlink.com/PRODUCTS/DGS-3100-48P/REVA/DGS-3100-48P_MANUAL_3.60_EN.PDF)? I'd like to protect my gateway's MAC/IP from spoofing, so I tried to add a rule "IP: 192.168.1.1 MAC: aa-aa-aa-aa-aa-aa", flagging all ports of the switch as untrusted. When I apply the rule, I lose the connection to all PCs attached to the switch. I certainly made some errors, but I can't understand what is wrong. Best regards

