Search Results

Search found 23346 results on 934 pages for 'clean url'.


  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-)

    I've been looking around at various tools for automation of software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software?

    I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like: when a human being pushes The Button,

        - pull branch A from the version-control repository B
        - run command C to compile it
        - copy the binaries D to servers E1 through E10
        - on each server, run command F to make all changes take effect

    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it?

    What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, then manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
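
    A minimal shell sketch of the push-button pipeline described above; the repository URL, branch, build command, server list and restart command are all hypothetical placeholders, not details from the question. In most Puppet/Chef setups the equivalent steps end up as "build a package, push it to an internal repo, let the agent install the new version", with a script like this only driving the build-and-publish half:

        #!/usr/bin/env bash
        # Hypothetical deploy pipeline; every name below is a placeholder.
        set -euo pipefail

        REPO=git@example.com:ourapp.git          # repository B (assumption)
        BRANCH=release                           # branch A (assumption)
        SERVERS=$(printf 'server%d ' {1..10})    # servers E1..E10 (assumption)

        workdir=$(mktemp -d)
        git clone --branch "$BRANCH" "$REPO" "$workdir"   # pull branch A from repo B
        make -C "$workdir" all                            # command C: compile

        for host in $SERVERS; do
            rsync -a "$workdir/build/" "$host:/opt/ourapp/"   # copy binaries D
            ssh "$host" 'sudo service ourapp restart'         # command F on each server
        done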


  • A little guidance setting up FTP server authentication on Windows Server 2008 R2 standard?

    - by Ropstah
    I have a (clean) server running Windows Server 2008 R2 Standard. I would just like to use it for serving a website and an FTP server through IIS. IIS is installed and serves my website properly. I have now added an FTP site, but when I try to log on using my user/pass I get the following error:

        530 User cannot log in

    From this article (http://support.microsoft.com/kb/200475) I understand that these four causes can be pointed out:

        - The "Allow only anonymous connections" security setting has been turned on in the Microsoft Management Console (MMC). Not the case.
        - The username does not have the "Log on locally" permission in User Manager. The user is in the Users group; however, I'm not able to log on through RDP. I tried configuring this by following this article through GPMC, but that only works when I'm logged in as a domain user on a domain controller, which I'm not: I'm logged in as administrator.
        - The username does not have the "Access this computer from the network" permission in User Manager. Not sure what this implies...?
        - The Domain Name was not specified together with the username (in the form of DOMAIN\username). Tried adding the server name: server\username, not working...

    I am an absolute server noob and I'd just like to be able to connect through FTP... Any guidance is highly appreciated!


  • Static file downloads from the browser break through Varnish but work fine with Apache

    - by Ron
    I would first like to thank everyone at Server Fault for this great website; I also end up on this site while searching Google for various server-related issues and setups. I have an issue today, so I am posting here and hope the seniors will help me out.

    I set up a website on a dedicated server a few days ago, using Varnish 3 as the frontend to Apache 2 on a Debian Lenny server, as the traffic was a bit high. There are several static file downloads of around 10-20 MB in size on the website. The website looked fine in the days after I set it up. I was checking from a 5 Mbps+ broadband connection, and the file downloads completed in seconds and worked fine.

    But today I realized that on a slow internet connection the file downloads were breaking off. When I tried to download the files from the website using a browser, the download broke off after a minute or so. It kept happening again and again, so it had nothing to do with the internet connection. The connection was around 512 kbps, so it was not dial-up-level speed either, but a decent speed at which files should download easily, though not that fast.

    Then I thought of trying the Apache backend port directly to check whether the problem still occurs. When I added the Apache port to the static file download URL, the files downloaded easily and did not break even once. I tried it several times to make sure it was not a coincidence, but every time I used the Apache port in the file download URL it downloaded fine, while it broke each time with the normal link, which is routed through Varnish. So it seems Varnish has somehow caused the broken file downloads. Could anyone give any idea as to why this is happening and how to fix the problem?

    For more clarification, take this example: Apache backend set on port 8008, Varnish frontend set on port 80. When I download, say, http://mywebsite.com/directory/filename.extension, the download breaks off after a minute or so. I cannot be sure whether it is due to the time or the size, though; I am just assuming, and it may be some other reason entirely. But when I download using http://mywebsite.com:8008/directory/filename.extension, the file download does not break at all and completes fine. So it seems that Varnish is somehow causing the broken downloads, not Apache.

    Does anybody have any idea why this is happening and how it can be fixed? Any help would be highly appreciated. My Varnish default.vcl is:

        backend apache {
            .host = "127.0.0.1";
            .port = "8008";
        }

        sub vcl_deliver {
            remove resp.http.X-Varnish;
            remove resp.http.Via;
            remove resp.http.Age;
            remove resp.http.Server;
            remove resp.http.X-Powered-By;
        }
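
    One hedged thing worth checking here: Varnish's send_timeout run-time parameter (on the order of ten minutes by default in Varnish 3) aborts delivery to clients that take too long to receive a response, which matches "slow clients get cut off, fast clients are fine". A sketch of inspecting and raising it; the value and file path are illustrative:

        # Show the current value via the management CLI
        varnishadm param.show send_timeout

        # Raise it for the running instance, e.g. to two hours
        varnishadm param.set send_timeout 7200

        # To make it permanent, add it to the varnishd start-up options,
        # e.g. in /etc/default/varnish on Debian:
        #   DAEMON_OPTS="... -p send_timeout=7200"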


  • Autounmounting USB keys with FAT filesystem on Linux (RHEL5)

    - by niXar
    For security reasons, I have two workstations in front of me, and I can only transfer data between them through a USB key. As you can imagine, it can quickly get tiresome, but the most annoying part is having to unmount the things before removing them. Not unmounting them results in missing files most of the time, even if I remove them a while after having last written to them.

    Now, since they're only used for transferring smallish files, and each is basically written once and read once, I don't need the fancy-pants caching infrastructure that makes clean unmounting a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time. But anyway, the system doesn't need to force that on me; it could simply make sure everything is committed within a second, and work synchronously. Then when I remove the key, nothing is lost. Is there a way to do this? I would appreciate any other tips on handling this situation.

    Edit: it appears the situation has changed between RHEL5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below via gconf does not work anymore.
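
    One hedged option that fits the "just make sure everything is committed quickly" requirement: mount the key with the vfat flush option (and optionally sync), so writes are pushed to the device as soon as they complete. The device name and mount point below are examples only:

        # One-off mount with near-synchronous writeback (vfat supports "flush")
        mount -t vfat -o flush,noatime /dev/sdb1 /mnt/usbkey

        # Or an fstab entry so the key always gets these options
        # (the device path is illustrative; LABEL= or UUID= is more robust):
        #   /dev/sdb1  /mnt/usbkey  vfat  noauto,user,flush,noatime  0  0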


  • Why is Windows not able to create a system partition?

    - by hughes
    I'm reinstalling Windows 7 64-bit, and I encountered an issue I've never seen before. I have a legit copy of Windows 7 Professional 64-bit, and I've installed it probably half a dozen times on this machine in the past without a problem. Googling the error only brings up issues from people who are upgrading to Windows 7.

    The drive itself does not seem to have a problem. I can mount it on other systems, and I can create an NTFS partition on it on other machines. I can install Ubuntu on it without any issues. Additionally, if I try using my alternate backup hard drive, the installer gives the same error. I have run diskpart from the setup page, and clean seems to report that all is well.

    However, I cannot get past the screen below, which says:

        Setup was unable to create a new system partition or locate an existing system partition.

    This happens regardless of whether or not the disk space is already allocated. What is causing this? How do I solve or get past this?


  • Configuring nginx server to handle requests from multiple domains

    - by KillABug
    Use case: I am working on a web application which allows users to create HTML templates and publish them on Amazon S3. To publish the websites, I use nginx as a proxy server.

    What the proxy server does is: when a user enters a website URL, I want to check whether the request comes from my application, i.e. app.mysite.com (this won't change), and route it to Apache for regular access; if it is coming from some other domain, like a regular URL www.mysite.com (this needs to be handled dynamically; it can be random), it goes to the S3 bucket that hosts the template.

    My current configuration is:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;
            charset utf-8;
            keepalive_timeout 65;
            server_tokens off;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;

            # Default server block to catch undefined host names
            server {
                listen 80;
                server_name app.mysite.com;
                access_log off;
                error_log off;

                location / {
                    proxy_pass http://127.0.0.1:8080;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $host;
                    proxy_redirect off;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_connect_timeout 90;
                    proxy_send_timeout 90;
                    proxy_read_timeout 90;
                    client_max_body_size 10m;
                    client_body_buffer_size 128k;
                    proxy_buffer_size 4k;
                    proxy_buffers 4 32k;
                    proxy_busy_buffers_size 64k;
                }
            }
        }

        # Load all the sites
        include /etc/nginx/conf.d/*.conf;

    Update, as I was not clear enough: my question is how I can handle both domains in the config file. My nginx instance is a proxy server on port 80 on an EC2 instance. The instance also hosts my application, which runs on Apache on a different port. So any request for my application will come from the domain app.mysite.com, and I also want to proxy the hosted templates on S3, which live inside a bucket, say sites.mysite.com/coolsite.com/index.html. So if someone hits coolsite.com, I want to proxy it to the folder sites.mysite.com/coolsite.com/index.html and not to app.mysite.com. Hope I am clear.

    The other server block:

        # Server for S3
        server {
            # Listen on port 80 for all IPs associated with your machine
            listen 80;

            # Catch all other server names
            # (I want it to handle other domains than app.mysite.com)
            server_name _;

            # This code gets the host without www. in front and places it inside
            # the $host_without_www variable.
            # If someone requests www.coolsite.com, then $host_without_www will
            # have the value coolsite.com
            set $host_without_www $host;
            if ($host ~* www\.(.*)) {
                set $host_without_www $1;
            }

            location / {
                # This code rewrites the original request and adds the host without www in front.
                # E.g. if someone requests
                #     /directory/file.ext?param=value
                # from the coolsite.com site, the request is rewritten to
                #     /coolsite.com/directory/file.ext?param=value
                set $foo 'http://sites.mysite.com';
                # echo "$foo";
                rewrite ^(.*)$ $foo/$host_without_www$1 break;

                # The rewritten request is passed to S3
                proxy_pass http://sites.mysite.com;
                include /etc/nginx/proxy_params;
            }
        }

    Also, I understand I will have to make DNS changes in the CNAME of the domain. I guess I will have to add app.mysite.com under the CNAME of the template domain name? Please correct me if that is wrong. Thank you for your time.


  • How can laptop keyboard keys be removed and replaced?

    - by Lord Torgamus
    I'm trying to fix a laptop keyboard that has issues with keys on its left side. Just by feel, it's clear that something sticky got under there. There could be something crunchy too, but that might just be the sound of the key's spring releasing itself from the sticky stuff. I don't know the cause because it's not my computer and the owner isn't sure, but I'm guessing a soda spill for now. The computer is an HP dv2500.

    I've removed the keyboard and blown under it, but that hasn't helped. I didn't use compressed air because I just don't have any available, but I suspect it wouldn't help with something sticky anyway. So, I'd like to pop the keys off and clean with damp cotton swabs or similar. Is there a proper way to remove the keys? I've found some instructions via Google for non-laptop keyboards, but they don't seem like they'd work for me.

    Alternate solutions to the problem are also welcome, but I've been curious about how to remove the keys for some time for other reasons.


  • How can visiting a webpage infect your computer?

    - by Cybis
    My mother's computer recently became infected with some sort of rootkit. It began when she received an email from a close friend asking her to check out some sort of webpage. I never saw it, but my mother said it was just a blog of some sort, nothing interesting.

    A few days later, my mother signed in on the PayPal homepage. PayPal gave some sort of security notice which stated that, to prevent fraud, they needed some additional personal information. Among some of the more normal information (name, address, etc.), they asked for her SSN and bank PIN! She refused to submit that information and complained to PayPal that they shouldn't ask for it. PayPal said they would never ask for such information and that it wasn't their webpage. There was no such "security notice" when she logged in from a different computer, only from hers. It wasn't a phishing attempt or redirection of some sort; IE clearly showed an SSL connection to https://www.paypal.com/

    She remembered that strange email and asked her friend about it; the friend never sent it! Obviously, something on her computer was intercepting the PayPal homepage, and that email was the only other strange thing to happen recently. She entrusted me to fix everything. I nuked the computer from orbit since it was the only way to be sure (i.e., reformatted her hard drive and did a clean install). That seemed to work fine.

    But that got me wondering... my mother didn't download and run anything. There were no weird ActiveX controls running (she's not computer illiterate and knows not to install them), and she only uses webmail (i.e., no Outlook vulnerability). When I think webpages, I think content presentation: JavaScript, HTML, and maybe some Flash. How could that possibly install and execute arbitrary software on your computer? It seems kind of weird/stupid that such vulnerabilities exist.


  • Should I keep my ex-employer's data?

    - by Jurily
    Following my brief reign as System Monkey, I am now faced with a dilemma: I did successfully create a backup and a test VM, both on my laptop, as no computer at work had enough free disk space. I didn't delete the backup yet, as it's still the only one of its kind in the company's history. The original is running on a hard drive in continuous use since 2006. There is now only one person left at the company who knows what a backup is, and they're unlikely to hire someone else, for reasons very closely related to my departure. Last time I tried to talk to them about the importance of backups, they thought I was threatening them. Should I keep it?

    Pros:
        - I get to save people from their own stupidity (the unofficial sysadmin motto, as far as I know)
        - I get to say "I told you so" when they come begging for help, and feel good about it
        - I get to say nice things about myself at my next job interview
        - Nice clean conscience
        - Bonus rep with the appropriate deities

    Cons:
        - Legal problems: even if I do help them out with it, they might just sue me for keeping it anyway, although given the circumstances I think I have a good case
        - Legal problems: given the nature of the job and their security, if something leaks, I'm a likely target for retaliation
        - Legal problems: whatever else I didn't think about
        - I need more space for porn.
        - Legal problems.

    What would you do?


  • Old scheduled task still being started, but can't find it.

    - by JvO
    System: Windows XP Home

    Summary: Some scheduled task is still being started by Windows, but I can't find it, nor determine where its configuration has been stored. This is turning into a mystery for me...

    I set up a Windows XP Home machine to run a task at 7:00 AM, using the Task Manager. This was a clean install, with no users defined, so you got straight to the desktop after starting the machine. The filesystem uses NTFS. Later on, I needed to introduce users, so I created one (named Sam) with administrator privileges. After this I noticed that the scheduled task failed, most likely due to privilege errors (i.e. it can't write to a network drive). So I want to delete the old task and add it again with the correct user credentials.

    However... I can't find the old task! I know it is still being executed at 7:00 AM, but there's no mention of this task anywhere on the system. I've looked in C:\Windows\Tasks for .job files, but there's only the "MP Scheduled Scan.job" from Security Essentials. I've searched the whole disk for any mention of the batch file that is being run, but can't find it.

    So why is this old task still running, and more importantly, why can't I find it? Would it have something to do with introducing users on XP?


  • How can I move an ext3 partition to the beginning of the drive without losing data?

    - by Felipe Alvarez
    I have a 500 GB external drive. It had two partitions, each around 250 GB. I removed the first partition. I'd like to move the second one to the left, so it consumes 100% of the drive. How can this be accomplished without any GUI tools (CLI only)?

    fdisk:

        Disk /dev/sdd: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xc80b1f3d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd2           29374       60801   252445410   83  Linux

    parted:

        Model: ST350032 0AS (scsi)
        Disk /dev/sdd: 500GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start  End    Size   Type     File system  Flags
         2      242GB  500GB  259GB  primary  ext3         type=83

    dumpe2fs:

        Filesystem volume name:   extstar
        Last mounted on:          <not available>
        Filesystem UUID:          f0b1d2bc-08b8-4f6e-b1c6-c529024a777d
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal dir_index filetype needs_recovery sparse_super large_file
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              15808608
        Block count:              63111168
        Reserved block count:     0
        Free blocks:              2449985
        Free inodes:              15799302
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8208
        Inode blocks per group:   513
        Filesystem created:       Mon Feb 15 08:07:01 2010
        Last mount time:          Fri May 21 19:31:30 2010
        Last write time:          Fri May 21 19:31:30 2010
        Mount count:              5
        Maximum mount count:      29
        Last checked:             Mon May 17 14:52:47 2010
        Check interval:           15552000 (6 months)
        Next check after:         Sat Nov 13 14:52:47 2010
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      d0363517-c095-4f53-baa7-7428c02fbfc6
        Journal backup:           inode blocks
        Journal size:             128M


  • Strange sound behavior

    - by caarlos0
    First of all, sorry for the title. English isn't my native language, and I can't find a word to describe the strange behavior I'm getting here.

    In the simplest terms, it's like the sound keeps going down and up again... Think of a kid with one of those old radios that have a round volume knob, and the kid keeps turning the knob back and forth. That is the behavior I'm getting here.

    At first I believed it was a PulseAudio issue, but it isn't. I followed the part of the wiki that I thought covered my problem, but it didn't work. After that, as I'm using XFCE, I didn't really need PulseAudio, so I removed it and stayed with a clean ALSA, hoping that would fix my problem. Sweet mistake. It really looks like a kid looking for trouble. I believed it worked, and then, suddenly, the same issue was back.

    BTW: I have a fully upgraded testing system (yeah, I upgraded to testing hoping for a newer PulseAudio version that would fix the issue), no PulseAudio at all, just XFCE, started with startxfce. What can I do to fix this? It's extremely annoying... sometimes I just want to throw my laptop at the wall because of this. Any extra info you need, please tell me. Thanks in advance.

    EDIT: My alsamixer looks like this (screenshot), and here is a video of the sound behavior.


  • OS X won't boot up unless I hold down option key

    - by Gazzer
    I have a strange issue on an early 2008 Mac Pro running OS X 10.6:

        - if I restart the computer, it restarts normally
        - if I shut down and boot, it stops at the grey screen just before the boot process
        - if I shut down and boot but hold down the Option key, I can select the boot disk and all is good

    I've just cloned the disk, and the same thing happens. The disk is a Samsung HD154UI. The disk is partitioned (the second partition holds a clone of the Snow Leopard install disc). One weird thing on the original disk was that one of the partitions said 'EFI Boot' in a non-aliased font, rather than the name of the disk, when the disks were listed after holding down Option.

    Solution: it seems that there was a problem with the disk. Part of the difficulty in finding the solution was that you need to remove the disk from the computer completely. For example, a good disk in Bay 3 wouldn't boot up if the bad disk was in Bay 2, so for ages I thought the problem was hardware related to Bay 3. So if you think you have a dodgy disk, remove it entirely when you are testing the hardware with a 'clean' disk. Clearing the PRAM helped to get the new disk to work too.


  • Play framework 2.2 using Upstart 1.5 (Ubuntu 12.04)

    - by Leon Radley
    I'm trying to get Play 2.2 working with Upstart. I've been running Play 2.x with Upstart since its release and it's never been a problem. But since the release of 2.2 and the change to the sbt-native-packager (http://www.scala-sbt.org/sbt-native-packager/), Play doesn't want to start any more. Here's the config I'm using:

        description "PlayFramework 2.2"
        version "2.2"

        env APP=myapp
        env USER=myuser
        env GROUP=www-data
        env HOME=/home/myuser/app
        env PORT=9000
        env ADDRESS=127.0.0.1
        env CONFIG=production.conf
        env JAVAOPTS="-J-Xms128M -J-Xmx512m -J-server"

        start on runlevel [2345]
        stop on runlevel [06]

        respawn
        respawn limit 10 5
        expect daemon

        # If you want the upstart script to build play with sbt
        pre-start script
            chdir $HOME
            sbt clean compile stage -mem $SBTMEM
        end script

        exec start-stop-daemon --pidfile ${HOME}/RUNNING_PID --chuid $USER:$GROUP --exec ${HOME}/bin/${APP} --background --start -- -Dconfig.resource=$CONFIG -Dhttp.address=$ADDRESS -Dhttp.port=$PORT $JAVAOPTS

    I've changed the JAVAOPTS to include the -J- prefix, and I've also changed the path to use the new start script located in the /bin/ dir. I've read that Upstart 1.4 has setuid and setgid. I've tried removing the start-stop-daemon, but I haven't got that working either. Any suggestions would be appreciated.
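
    A hedged debugging step, separate from Upstart itself: run the staged start script by hand as the target user, with the same options, to see any errors the job would otherwise swallow. The values below simply mirror the env settings above, and the job name myapp is an assumption:

        # Run the packaged start script directly as the service user
        sudo -u myuser /home/myuser/app/bin/myapp \
            -Dconfig.resource=production.conf \
            -Dhttp.address=127.0.0.1 \
            -Dhttp.port=9000 \
            -J-Xms128M -J-Xmx512m -J-server

        # If that starts cleanly, start the job and watch its Upstart log
        sudo start myapp
        sudo tail -f /var/log/upstart/myapp.log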


  • Backing up data from a RAID 1 disk outside its original server

    - by Doomsday
    I'm facing what I think is a pretty easy problem. I've extracted a working disk from a RAID 1 array, and I'm looking to copy only the data (the filesystem and RAID configuration don't matter) to another location (another filesystem). My problem is that I'm not able to mount this disk properly on another Linux machine.

    I first looked at the partition table:

        # fdisk -l /dev/sdc

        Disk /dev/sdc: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1              63  1249535699   624767818+  fd  Linux raid autodetect
        /dev/sdc2      1249535700  1250017649      240975   fd  Linux raid autodetect
        /dev/sdc3      1250017650  1250258624      120487+  82  Linux swap / Solaris

    I understood I should use the Linux software RAID (md) tools. Once installed:

        # cat /proc/mdstat
        Personalities :
        md0 : inactive sdc1[1](S)
              624767744 blocks

        unused devices: <none>

    And some other information:

        # mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
          Creation Time : Mon Jun  2 03:39:41 2008
             Raid Level : raid1
          Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
             Array Size : 624767744 (595.82 GiB 639.76 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0

            Update Time : Tue Feb  7 22:34:59 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
               Checksum : a505b324 - correct
                 Events : 15148

              Number   Major   Minor   RaidDevice State
        this     1       8        1        1      active sync   /dev/sda1

           0     0       8       17        0      active sync   /dev/sdb1
           1     1       8        1        1      active sync   /dev/sda1

    From here, I tried to mount it, but I'm not comfortable with the md tools and how they work:

        # mount /dev/sdc1 /mnt/sdc1
        mount: unknown filesystem type 'linux_raid_member'
        # mount /dev/md0 /mnt/sdc1
        mount: /dev/md0: can't read superblock

    I've seen some options to alter the RAID array with mdadm, but I only want to copy the data off its filesystem before wiping it... Anyone have a clue?
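
    A hedged sketch of the usual way to read a single RAID 1 member like this: stop the inactive array, assemble a degraded array from just that partition, and mount it read-only. The md device name, mount point and destination path are arbitrary:

        # Stop the half-assembled, inactive array holding sdc1
        mdadm --stop /dev/md0

        # Assemble a (degraded) array from the single member and force it to run
        mdadm --assemble --run /dev/md127 /dev/sdc1

        # Mount it read-only and copy the data off
        mkdir -p /mnt/raidmember
        mount -o ro /dev/md127 /mnt/raidmember
        rsync -a /mnt/raidmember/ /path/to/destination/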


  • How to display/define mirror/striping pairs with mdadm

    - by Chris
    I want to make a standard Linux software RAID 10 over 4 HDDs. The server has 4 HDDs, 2 pairs from different vendors, in order to avoid batch problems. I want each mirror to span the two different vendors, and then the stripe to run over the mirror pairs. I could do that by manually creating a RAID 1 + RAID 0 nest, but mdadm supports RAID level 10 directly. I just can't figure out how that RAID 10 is then handled and how the data is distributed.

        mdadm --detail /dev/md10
        /dev/md10:
                Version : 1.2
          Creation Time : Wed May 28 11:06:23 2014
             Raid Level : raid10
             Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
          Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Wed May 28 11:06:23 2014
                  State : clean, resyncing (PENDING)
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2
             Chunk Size : 512K

                   Name : pdwhost:10  (local to host pdwhost)
                   UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
                 Events : 0

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       81        1      active sync   /dev/sdf1
               2       8       97        2      active sync   /dev/sdg1
               3       8      113        3      active sync   /dev/sdh1

    does not really give any information about that. How it should be:

        - RAID 1 / mirror over /dev/sda1 + /dev/sdf1 and over /dev/sdg1 + /dev/sdh1
        - RAID 0 over the two RAID 1 pairs

    Is it possible to do that with the built-in "level=10", and how can I see which pairs are mirrored? Thanks a lot for your help.
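
    For reference, a hedged sketch of the explicit nested variant mentioned above (two RAID 1 pairs, then a stripe across them). With the single-level raid10 and the default near=2 layout, adjacent devices in the order they are listed form the mirror copies, so listing them as sda1 sdf1 sdg1 sdh1 should pair them vendor by vendor, but that is something to verify on your own array rather than take on faith:

        # Explicit RAID 1+0: two mirrors, then a stripe across them
        mdadm --create /dev/md1  --level=1  --raid-devices=2 /dev/sda1 /dev/sdf1
        mdadm --create /dev/md2  --level=1  --raid-devices=2 /dev/sdg1 /dev/sdh1
        mdadm --create /dev/md10 --level=0  --raid-devices=2 /dev/md1 /dev/md2

        # Single-level alternative: with layout near=2, the device order
        # determines which drives mirror each other
        # mdadm --create /dev/md10 --level=10 --layout=n2 --raid-devices=4 \
        #     /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1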


  • yum fails installing php53-devel.x86_64

    - by coding_hero
    I need to recompile PHP on a Fedora server because I need to use the --enable-zip flag. When trying to install the devel package, I get the following message (this is after a 'yum clean all'):

        # yum install php53-devel.x86_64
        Loaded plugins: rhnplugin, security
        rhel-x86_64-server-5                               | 1.4 kB     00:00
        rhel-x86_64-server-5/primary                       | 4.9 MB     00:00
        rhel-x86_64-server-5                                           14161/14161
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php53-devel.x86_64 0:5.3.3-13.el5_8 set to be updated
        --> Processing Dependency: php53 = 5.3.3-13.el5_8 for package: php53-devel
        --> Finished Dependency Resolution
        php53-devel-5.3.3-13.el5_8.x86_64 from rhel-x86_64-server-5 has depsolving problems
          --> Missing Dependency: php53 = 5.3.3-13.el5_8 is needed by package php53-devel-5.3.3-13.el5_8.x86_64 (rhel-x86_64-server-5)
        Error: Missing Dependency: php53 = 5.3.3-13.el5_8 is needed by package php53-devel-5.3.3-13.el5_8.x86_64 (rhel-x86_64-server-5)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest

    Output of 'yum repolist':

        # yum repolist
        Loaded plugins: rhnplugin, security
        repo id                repo name                                            status
        rhel-x86_64-server-5   Red Hat Enterprise Linux (v. 5 for 64-bit x86_64)    enabled: 14,161
        repolist: 14,161
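
    This error usually means the installed php53 base package is at a different (older) version than the php53-devel the repository offers. A hedged set of commands to confirm and resolve that; the exact version strings will be whatever the system reports:

        # See which php53 version is actually installed
        rpm -q php53
        yum list installed 'php53*'

        # If the installed version is older than 5.3.3-13.el5_8, bring the base
        # packages up to the repo version first, then the devel package resolves
        yum update 'php53*'
        yum install php53-devel.x86_64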


  • How to migrate Outlook Express mail rules?

    - by ronwest
    I have a home computer that only had a 15 GB C: drive, and it ran out of space with all the Microsoft updates, etc., that keep coming down. So I fitted a 160 GB drive as a C: drive and altered the drive jumpers to make the old C: drive into a slave D: drive, to save migrating documents, etc.

    I've installed a clean copy of Windows XP SP3 and reassigned the new Outlook Express' mail store path to point to the old mail store folder, which now has a D: drive letter, and it all works OK. However, my extensive list of mail rules has not been transferred to the new OE, and I have not been able to identify where the rules are stored.

    To find them, I added a new rule to the new OE, exited OE, then searched the whole computer (including hidden/system files) for files altered around the time I added the rule. I hoped I could just overwrite a new empty file with an old one. But the only files that seem to have changed are Windows system-level files and some bits and pieces in the Windows\Prefetch sub-folder. None of them can be opened, as XP has them locked, and none of them have names that have anything to do with email or rules.

    Does anyone know of any way of migrating OE rules, or do I have to re-enter them by hand? Thanks!


  • Why does HP Update at remote system trigger RDP printing at local system?

    - by lcbrevard
    This is obscure. When connected with RDP to another system that has HP Update installed on it, either directly running HP Update or having the notification pop up asking whether you want to run HP Update causes the local system to try to print something to a peculiarly chosen local printer.

    Case 1: Desktop Win 7 Ultimate system RDP-connected to an HP laptop Win 7 Ultimate system. When HP Update runs on the laptop, a dialog for "XPS Writer Save As..." appears on the desktop system. Even if you put in a name, nothing gets generated and the dialog repeats. And repeats. Until you (a) close the RDP connection and (b) clean out the queued entries. If HP Update pops up the request to run the update and you are not at the desk when this happens, there can be dozens of queued requests for this bogus printing. NOTE: the XPS Writer is not selected as a default printer on either system.

    Case 2: A (different) HP laptop Win 7 Ultimate system RDP-connected to an XP Pro "brand X" desktop system, but with HP printer drivers installed. If the request to run HP Update pops up on the XP system, dozens of attempts to print, in this case to a VersaCheck printer driver, are queued. Dismissing the HP request, closing RDP, and cleaning out the queue are required to stop this. NOTE: the VersaCheck writer is not selected as a default printer on either system.

    THE QUESTION: What the heck is going on here? Some kind of scripting or COM activity that is misdirected?


  • Host couldn't be reached by domain name, only by IP: Apache's fault?

    - by MaxArt
    I have a Windows Server 2003 R2 32-bit machine running Apache 2.4.2 with OpenSSL 1.0.1c and PHP 5.4.5 via mod_fcgid 2.3.7. This config worked just fine for some hours, but then the site could no longer be reached by its domain name, say www.example.com, though it could still be reached by its IP address. In particular, while https://www.example.com/ yielded a connection error, http://123.1.2.3/ worked just fine. Yes, first https, then http. The error and access logs were clean, i.e. they showed no signs of problems; just the usual messages, which stopped while the site couldn't be reached.

    After some investigation, a simple restart of Apache solved the problem. Unfortunately, I didn't have the chance to test whether https://123.1.2.3/ worked as well, or whether http://www.example.com/ was still redirected to https as usual. So, does anyone have any idea of what happened? Before I get tired of Apache and ditch it in favor of Nginx?

    Edit: some log information. The last line of sslerror.log is from 90 minutes before the problem occurred, so I guess it's not important. ssl_request.log shows nothing interesting either; these are the last two lines before the problem:

        [28/Aug/2012:17:47:54 +0200] x.x.x.x TLSv1.1 ECDHE-RSA-AES256-SHA "GET /login HTTP/1.1" 1183
        [28/Aug/2012:17:47:45 +0200] y.y.y.y TLSv1 ECDHE-RSA-AES256-SHA "POST /upf HTTP/1.1" 73

    The previous lines are all the same and don't seem interesting, except for 4 lines like this one 30-40 seconds before the problem:

        [28/Aug/2012:17:47:14 +0200] z.z.z.z TLSv1 ECDHE-RSA-AES256-SHA "-" -

    These are the corresponding lines from sslaccess.log:

        z.z.z.z - - [28/Aug/2012:17:47:14 +0200] "-" 408 -
        ...
        x.x.x.x - - [28/Aug/2012:17:47:54 +0200] "GET /login HTTP/1.1" 200 1183
        y.y.y.y - - [28/Aug/2012:17:47:45 +0200] "POST /upf HTTP/1.1" 200 73


  • RAID degraded on Ubuntu server

    - by reano
    We're having a very weird issue at work. Our Ubuntu server has 6 drives, set up with RAID 1 as follows:

        /dev/md0, consisting of: /dev/sda1, /dev/sdb1
        /dev/md1, consisting of: /dev/sda2, /dev/sdb2
        /dev/md2, consisting of: /dev/sda3, /dev/sdb3
        /dev/md3, consisting of: /dev/sdc1, /dev/sdd1
        /dev/md4, consisting of: /dev/sde1, /dev/sdf1

    As you can see, md0, md1 and md2 all use the same 2 drives (split into 3 partitions). I also have to note that this is done via Ubuntu software RAID, not hardware RAID.

    Today, the md0 RAID 1 array shows as degraded; it is missing the /dev/sdb1 device. But since /dev/sdb1 is only a partition (and /dev/sdb2 and /dev/sdb3 are working fine), it's obviously not the drive that has gone AWOL; it seems the partition itself is missing. How is that even possible? And what could we do to fix it?

    My output of cat /proc/mdstat:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid1 sda2[0] sdb2[1]
              24006528 blocks super 1.2 [2/2] [UU]

        md2 : active raid1 sda3[0] sdb3[1]
              1441268544 blocks super 1.2 [2/2] [UU]

        md0 : active raid1 sda1[0]
              1464710976 blocks super 1.2 [2/1] [U_]

        md3 : active raid1 sdd1[1] sdc1[0]
              2930133824 blocks super 1.2 [2/2] [UU]

        md4 : active raid1 sdf2[1] sde2[0]
              2929939264 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    FYI, I tried the following:

        mdadm /dev/md0 --add /dev/sdb1

    but got this error:

        mdadm: add new device failed for /dev/sdb1 as 2: Invalid argument

    Output of mdadm --detail /dev/md0 is:

        /dev/md0:
                Version : 1.2
          Creation Time : Sat Dec 29 17:09:45 2012
             Raid Level : raid1
             Array Size : 1464710976 (1396.86 GiB 1499.86 GB)
          Used Dev Size : 1464710976 (1396.86 GiB 1499.86 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 15:55:07 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : lia:0  (local to host lia)
                   UUID : eb302d19:ff70c7bf:401d63af:ed042d59
                 Events : 26216

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       0        0        1      removed
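
    A hedged sequence that is commonly suggested for this situation: first confirm that the sdb1 partition still exists and is at least as large as sda1, then clear any stale md metadata on it and add it back. Only do the last two steps against a partition you are certain holds no other data:

        # Does the kernel still see the partition, and does its size match sda1?
        grep 'sd[ab]1' /proc/partitions
        fdisk -l /dev/sdb

        # If the partition is present and correctly sized, wipe any stale
        # md superblock and re-add it to the degraded array
        mdadm --zero-superblock /dev/sdb1
        mdadm /dev/md0 --add /dev/sdb1

        # Watch the rebuild
        watch cat /proc/mdstat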


  • SFTP, SCP, Secure WebDAV: which is the most suitable?

    - by Xavier Maillard
    Hi,

    Currently, I am hosting a WebDAV share set up to store files I need wherever I am. It is available via HTTPS. The thing is that I do not need all the HTTP machinery; my nginx HTTP server is only there for this WebDAV folder, and I am not sure I made the best choice.

    My requirements on the client side are:

        - secured transfers
        - mountable as a network drive at work with "near realtime sync"
        - usable from any OS I might use (including my mobile (Android))

    At first, I chose WebDAV since it would pass through my work proxy (which refuses anything that is not HTTP/S on port 80 or 443). Today, I am not satisfied with the setup, and even if nginx's memory footprint is pretty small, its WebDAV support is not really "clean" and complete.

    What would you recommend between SFTP, SCP and the current WebDAV solution? I think SFTP is the closest solution, but I still have to find out how to pass through my proxy ;) SCP seems quite limited from what I read about it (only file transfers, if I read right). Cheers
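
    On the "SFTP through an HTTP proxy" point, a hedged sketch: a proxy that allows CONNECT can usually tunnel SSH if the server listens on port 443, via a ProxyCommand helper such as corkscrew. The host names and proxy address below are placeholders:

        # ~/.ssh/config entry (server and proxy names are examples)
        #   Host myfiles
        #       HostName files.example.org
        #       Port 443
        #       ProxyCommand corkscrew proxy.corp.example 8080 %h %p

        # SFTP and sshfs then both work through the tunnel;
        # sshfs mounts the share like a network drive
        sftp myfiles
        sshfs myfiles:/data ~/mnt/data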


  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script...

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F    # Flush all rules
        iptables -X    # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    For context, this machine is a Virtual Private Server web app host. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? And if I shouldn't block ICMP entirely, how could I go about locking it down more?
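
    For a sense of what "locking ICMP down a bit more" often looks like in practice, a hedged sketch that would replace the blanket ICMP accept above: keep the error types that TCP/UDP rely on (destination-unreachable, which carries Path MTU discovery's fragmentation-needed messages, and time-exceeded), and rate-limit echo requests instead of accepting everything:

        # Allow the ICMP error types that normal traffic depends on
        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT

        # Still answer ping, but rate-limited to blunt floods
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s --limit-burst 5 -j ACCEPT

        # Any other ICMP falls through to the final DROP rule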


  • Why isn't Apache Basic authentication working?

    - by Brad
    I just upgraded Apache from its 2003-era build to a squeaky-clean, brand-new 2.4.1 build. All seems pretty good except for one glaring thing: in my httpd.conf file I have the following:

        <Directory />
            AllowOverride none
            Options FollowSymLinks
            AuthType Basic
            AuthName "Enter Password"
            AuthUserFile /var/www/.htpasswd
            Require valid-user
        </Directory>

    This should allow only users in the specified auth file to access the server, just as it did under the older version of Apache. (Right?) However, it's not working. Requests are granted with no authentication provided. When I switch logging to LogLevel debug, for the accesses it says:

        [Sat Mar 24 21:32:00.585139 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of Require all granted: granted
        [Sat Mar 24 21:32:00.585446 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of <RequireAny>: granted

    I really don't know what this means, and I (to the best of my knowledge) don't have any "Require all granted" or <RequireAny> statements in any of my files. Any ideas why this isn't working, or where to debug?
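
    Two hedged checks that usually narrow this down on 2.4: confirm the authentication modules are actually loaded, and look for a later <Directory> or <Location> section (often in an included conf file or a distribution default) that contains "Require all granted", since a more specific section overrides the Require valid-user set on <Directory />. The config paths below are examples:

        # Are the modules Basic auth needs actually loaded?
        httpd -M 2>/dev/null | grep -E 'auth_basic|authn_file|authz_user|authz_core'

        # Which included config grants access unconditionally?
        grep -Rn "Require all granted" /etc/httpd/ /usr/local/apache2/conf/ 2>/dev/null

        # Sanity-check the configuration actually being parsed
        httpd -t
        httpd -S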


  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time, I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate it when I migrated to other machines. Nowadays, I set up a private Trac installation, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This gives me a combination wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new clean machine, I have a much easier time bringing it up.

    And yet, I am always amazed when I see others just install this, delete that, run this, set up this config... without seeming to use any way of actually noting what they are doing. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
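
    As one concrete example of the kind of tooling often suggested for this, offered as a sketch rather than a prescription: put /etc under version control with etckeeper so configuration changes are recorded automatically, and capture ad-hoc troubleshooting sessions with script(1). Paths and the commit message are illustrative:

        # Put /etc under version control; package operations then auto-commit,
        # and manual edits can be committed with a message
        apt-get install etckeeper      # or: yum install etckeeper
        etckeeper init
        etckeeper commit "baseline before changing anything"

        # Record an interactive session, commands and output included
        mkdir -p ~/logs
        script -a ~/logs/$(hostname)-$(date +%F).typescript
        # ... do the work, then 'exit' to stop recording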

