Search Results

Search found 61241 results on 2450 pages for 'empty set'.

  • Howto boot directly into a VirtualBox image?

    - by mawimawi
    I have a running setup as follows: Native OS: Windows 7 64-bit, 3 partitions: c: (system), d: (FAT32, here is my vdi file), e: (unformatted). VirtualBox: Fedora 14 running off the vdi file on drive d. Usually this setup is great for me, but sometimes I'd like to run Linux natively, and not inside VirtualBox. Is there a way to boot directly into the vdi file without the Windows overhead? E.g. using a USB stick with some modified Linux kernel / GRUB that can mount the vdi file directly as "/"? Or copy the contents of my vdi file to the empty partition and somehow use this from VirtualBox (when booting into Windows) AND directly boot into Linux? Hope to get some hints or even howtos. EDIT: yes, sorry, not programming related. I posted the question to serverfault.com (hopefully that's the better site for my question.)
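    An added rough sketch (not from the original poster) of the second idea — moving the vdi contents onto the empty partition while keeping it bootable from VirtualBox. The file paths, physical drive number and partition number are placeholders, and GRUB would still need to be installed on that partition:

        # On the Windows host: turn the VDI into a raw image to get at its contents
        VBoxManage clonehd "D:\vm\fedora.vdi" "D:\vm\fedora.img" --format RAW
        # The image is a whole virtual disk (its own partition table included), so from a
        # Linux live USB you would copy its root filesystem out (losetup/kpartx) onto the
        # empty partition rather than dd'ing the image straight over it.
        # Once Linux lives on a real partition, VirtualBox can keep booting that same
        # install through a raw-disk VMDK (elevated prompt on the Windows host):
        VBoxManage internalcommands createrawvmdk -filename fedora-raw.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3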

    Read the article

  • dhcrelay running as both DHCP and DHCPv6 relay agent on CentOS 6.2

    - by Tibor
    I am trying to set up a DHCP relay agent that will relay DHCP requests for both IPv4 and IPv6. I am using CentOS 6.2 and the dhcrelay from the ISC DHCP implementation. I would like to set it up as a service, but the man page for dhcrelay states:

        -6   Run dhcrelay as a DHCPv6 relay agent. Incompatible with the -4 option.
        -4   Run dhcrelay as a DHCPv4/BOOTP relay agent. This is the default mode of operation,
             so the argument is not necessary, but may be specified for clarity. Incompatible with -6.

    So the -6 and -4 options are incompatible. How can I still make it work for both protocols without rolling my own service wrapper for both cases?
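    Not from the original post — a minimal sketch of the obvious workaround (one relay process per protocol); the interface names and server address are placeholders:

        # DHCPv4/BOOTP relay: listen on eth0 and forward to the IPv4 DHCP server
        dhcrelay -4 -i eth0 192.0.2.10
        # DHCPv6 relay: -l is the client-facing (lower) interface, -u the server-facing (upper) one
        dhcrelay -6 -l eth0 -u eth1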

    Read the article

  • ODSI + weblogic = JDBC problem

    - by Giuseppe Di Federico
    I'm currently developing a web service using ODSI through Oracle Workshop for WebLogic (formerly AquaLogic). I created a data source on WebLogic using the "Oracle thin driver 10g" driver, and the test succeeds on WebLogic. (My database is Oracle 10g, 10.2.0.1.0.) The problem occurs when I try to create the Physical Data Service in Oracle Workshop. I choose the following options: Data source type = Relational, Data source = [THE CORRECT NAME OF THE SOURCE SET ON WEBLOGIC], Database type = ??? AquaLogic doesn't allow me to select the database type. I guess it is a problem related to the driver set on WebLogic... but I ain't sure. Does someone know the nature of my problem? Tnx

    Read the article

  • Good Documentation on Avaya IP Office 500 r2 setup

    - by Cliff Racer
    I have set up a couple of Avaya IP Office systems over the course of my current job. I have a pretty good handle on the process, but now I am faced with something I have not done before. Both IP Office systems I have set up used all digital phones. The new system we are putting in place will use IP phones for the first time. After trying to track down some general documentation on my own, I was not able to find anything that left me feeling comfortable about setting up IP phones on an Avaya IP Office 500. Does anyone know of any good how-tos for setting up IP phones on IP Office? I get the impression it's pretty simple, but I've learned enough about Avaya to know that there are some tricky aspects to setting them up.

    Read the article

  • What is the EGG environment variable?

    - by Randall
    A user on our (openSuSE) linux systems attempted to run sudo, and triggered an alert. He has the environment variable EGG set - EGG=UH211åH1ÒH»ÿ/bin/shHÁSH211çH1ÀPWH211æ°;^O^Ej^A_j<X^O^EÉÃÿ This looks unusual to say the least. Is EGG a legitimate environment variable? (I've found some references to PYTHON_EGG_CACHE - could be related? But that environment variable isn't set for this user). If it's legit, then I imagine this group has the best chance of recognizing it. Or, given the embedded /bin/sh in the string above, does anyone recognize this as an exploit fingerprint? It wouldn't be the first time we had a cracked account (sigh).

    Read the article

  • vhost.conf with plesk makes infinite loop

    - by user134598
    So I'm trying to set up rewrite rules for my just-migrated site, and now we're using Plesk (unfortunately, in my opinion). In order to make those rewrites I'm using the vhost.conf file in the mydomain/conf folder, and I execute /usr/local/psa/admin/sbin/websrvmng -u --vhost-name=mydomain.org so that my file gets included into the httpd configuration. However, no matter what I write in my vhost.conf file, it makes my site go into an infinite loop whenever I try to load a URL that's not just the domain. Example: mydomain.org works just fine; mydomain.org/event/nameofevent will try endlessly to load, and eventually my browser detects the infinite loop. I thought I was writing something incorrectly in my vhost.conf file, but I even tried it with the file empty (not a single line) and it still tries to load endlessly. Can anybody hint whether I'm skipping a step (like some activation that should be done beforehand)? Thanks in advance.
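    Two added debugging commands, not from the poster; the httpd.include path is only a guess at the usual Plesk layout and varies between Plesk versions:

        # show the redirect chain the browser is choking on
        curl -sIL http://mydomain.org/event/nameofevent | grep -E 'HTTP/1|^Location:'
        # confirm websrvmng really pulled vhost.conf into the generated Apache config
        grep -n vhost.conf /var/www/vhosts/mydomain.org/conf/httpd.include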

    Read the article

  • Way to update / refresh / reschedule project plan after adding a vacation / changing calendar

    - by CodeCanvas
    I had created a project plan using MS Project 2010 (not Server). I had set the task schedule mode to "Auto Scheduled" and entered the necessary tasks. Since it is a single-person project I also added one person to the Resources of the file, assigned that resource to all the tasks and leveled the project. After the plan was put in and tasks were leveled, I figured the calendar was not correctly set (in the UAE the weekend is Friday and Saturday instead of Saturday and Sunday). So I updated the default (Standard) calendar of the project by going to Project > Change Working Time > Work Weeks and changed the weeks as needed. But after doing this, the tasks are still scheduled over Friday and Saturday even though I have marked them as nonworking days in the Standard calendar. I tried the following to get the tasks to refresh, but I was not successful:
    - Updated all tasks to use the "Standard" calendar in the project
    - Selected the option so that tasks do not ignore resource calendars
    - Added an "As Soon As Possible" constraint
    - Executed "Level All"
    Any help on solving this issue would be much appreciated, thanks in advance.

    Read the article

  • How do I open WPS files in Word Starter 2010?

    - by Sean
    Ok, this is driving me crazy. My parents have hundreds of old WPS documents from an ancient version of MS Works, and they just bought a new computer with MS Word 2010 Starter on it. I am trying to set it up so that the default program for opening the WPS files is MS Word, but there is no EXE anywhere in Program Files or Program Files (x86). I opened up Process Explorer and tried to figure out where the executable for Word is, and it turns out it is on the Q: drive... the same Q: drive that seems to be inaccessible no matter what I try. I tried adding the exact address of Word, but if I try and set that on anything, it says that it cannot find the file. This is driving me insane; is there any way to make it real easy to open these WPS files in Word?!?

    Read the article

  • Color Calibrate Dual Monitor XP SP2

    - by Laramie
    This topic has been touched on before but not really answered. I have a dual-monitor system and the colors differ wildly. I currently live in Buenos Aires, where color correction hardware costs premium prices. I do some graphic design, but don't require a pro-level calibration. That said, I'd like my monitors to be set as close to "true color" as possible. I've located the useful and free Monitor Calibration Wizard, but it seems to adjust the entire system internally at startup. I could use the Microsoft Color Control Panel Applet to set a different ICC or ICM profile for each monitor, but the Monitor Calibration Wizard outputs its own format for profiles.

    Read the article

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR: If Intel RST has marked a drive other than my RAID set as the system drive, how can I get the RAID set seen as the system drive again, and catch it up to the drive everything is running from now?
    What happened (NOTE: some perhaps unwise decisions are ahead; this is as best as I can recall the order of things):
    - I had a 2x1TB RAID1 config. I bought the drives around the same time, and they started to die around the same time.
    - I replaced the 1st drive with a 2 TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the 2nd drive with a manufacturer's replacement.
    - I got a second manufacturer's replacement drive and used it as a spare, so I now had a 1TB/2TB RAID1 and another 1TB as a spare.
    - The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stuff, I bought another 2 TB drive and upped the config to a 2x2TB RAID1 with the other, functioning manufacturer's drive as a spare.
    - I made the mistake of trying to bring the other drive online to clean it out, and the signature clash killed my machine. When the machine rebooted, that drive was marked as the system drive.
    So, I have a 2x2TB RAID1 that is apparently offline, and one spare 1 TB refurbished drive that everything is being run from. Not a great idea.
    Options I'm considering:
    1. Bring the 2x2TB RAID1 back online, and then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!). Anything that isn't is likely trivial, enough that I'm willing to take the risk. One downside here is that if the 2 TB doesn't have data on it for some reason, I could be screwed trying to put the other drive back in, no?
    2. Try to somehow get the RAID1 updated with the data from the current system drive.
    3. Option 3?

    Read the article

  • Is it possible to write C# code as below and send email using my home network?

    - by kedar karthik
    Is it possible to write C# code as below and send email using my home network? I have a valid user name and password on that Exchange server. Is there any configuration that I can set to achieve this? BTW, this code below works when I run it within the office network; I want it to work when run from any network.

        String cMSExchangeWebServiceURL = (String)System.Configuration.ConfigurationSettings.AppSettings["MSExchangeWebServiceURL"];
        String cEmail = (String)System.Configuration.ConfigurationSettings.AppSettings["Cemail"];
        String cPassword = (String)System.Configuration.ConfigurationSettings.AppSettings["Cpassword"];
        String cTo = (String)System.Configuration.ConfigurationSettings.AppSettings["CTo"];

        ExchangeServiceBinding esb = new ExchangeServiceBinding();
        esb.Timeout = 1800000;
        esb.AllowAutoRedirect = true;
        esb.UseDefaultCredentials = false;
        esb.Credentials = new NetworkCredential(cEmail, cPassword);
        esb.Url = cMSExchangeWebServiceURL;
        ServicePointManager.ServerCertificateValidationCallback +=
            delegate(object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
            { return true; };

        // Create a CreateItem request object
        CreateItemType request = new CreateItemType();

        // Setup the request:
        // Indicate that we only want to send the message. No copy will be saved.
        request.MessageDisposition = MessageDispositionType.SendOnly;
        request.MessageDispositionSpecified = true;

        // Create a message object and set its properties
        MessageType message = new MessageType();
        message.Subject = subject;
        message.Body = new TestOutgoingEmailServer.com.cogniti.mail1.BodyType();
        message.Body.BodyType1 = BodyTypeType.HTML;
        message.Body.Value = body;
        message.ToRecipients = new EmailAddressType[3];
        message.ToRecipients[0] = new EmailAddressType();
        //message.ToRecipients[1] = new EmailAddressType();
        //message.ToRecipients[2] = new EmailAddressType();
        message.ToRecipients[0].EmailAddress = "[email protected]";
        message.ToRecipients[0].RoutingType = "SMTP";
        //message.CcRecipients = new EmailAddressType[1];
        //message.CcRecipients[0] = new EmailAddressType();
        //message.CcRecipients[0].EmailAddress = toEmailAddress.ElementAt(1).ToString();
        //message.CcRecipients[0].RoutingType = "SMTP";
        // There are some more properties in the MessageType object;
        // you can set them all according to your requirements.

        // Construct the array of items to send
        request.Items = new NonEmptyArrayOfAllItemsType();
        request.Items.Items = new ItemType[1];
        request.Items.Items[0] = message;

        // Call the CreateItem EWS method.
        CreateItemResponseType response = esb.CreateItem(request);

    Read the article

  • How to get Word 2003 to make my print layout go from left to right?

    - by Shaul
    My copy of MS Word 2003 was installed on my computer with the locale set to Israel, so among other things my Normal.dot template was set up for right-to-left. I managed to fix most of the Hebrew support things so that I am working in English by default now. The only thing I haven't found a cure for is how to make the "print layout" view also go from left to right; as things are, the page flow always appears from right to left, even in English documents - IOW, page 1 appears on the right of page 2, as shown below. I can't see any obvious option to change this. How do I do it?

    Read the article

  • Tomcat memory usage grows until crash with no GC run

    - by Phil
    I'm administrating a server running Tomcat that is getting a lot of traffic lately. If I monitor memory usage in Task Manager I can see the memory usage growing, and eventually Tomcat crashes around the 1 GB mark. Here are the memory-relevant bits I've set in Tomcat Properties (this is a Windows server):

        Initial memory pool: 1024 MB
        Maximum memory pool: 1024 MB
        -XX:MaxPermSize=256M

    The weird thing is, since these problems arose I've deployed Lambda Probe to the Tomcat instance, and the memory usage values I see there are much lower; for example Task Manager might show 467 MB used while the "Total" used in Probe is 212 MB. Also, the Maximum Total listed in Probe is 1.29 GB, when I would have expected 1 GB, the maximum memory set above. If I force the garbage collector to run using Probe, I can keep Tomcat from crashing for a while (indefinitely, AFAIK). So why doesn't the GC run automatically and stop Tomcat from crashing? Thanks.
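    An added, hedged way to see whether the growth Task Manager reports is Java heap (which the GC can reclaim) or native/permgen memory — assuming a full JDK is installed alongside Tomcat; <tomcat_pid> is the Tomcat process id:

        # heap and permgen occupancy plus GC counts, sampled every 5 seconds
        jstat -gcutil <tomcat_pid> 5000
        # capacities committed/reserved for each generation, to compare with the 1024 MB setting
        jstat -gccapacity <tomcat_pid>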

    Read the article

  • When clicking an irc:// link, a new instance of chatzilla opens instead of the existing one being used.

    - by WebDevHobo
    That is my problem in a nutshell. I'm running Win7 32-bit. I have ChatZilla on XULRunner, so not as the Firefox add-on. When I click any irc:// link, a new instance of ChatZilla is started. I have a lot of startup commands set, so all of those get executed. I stop the new instance before it takes off, but this is rather annoying. Firefox's application settings just link to the path where the executable is, with no option to set any command-line arguments to make the existing instance be used. Is there any Firefox or Windows setting that I can manipulate so that when Firefox calls chatzilla.exe, the existing instance is used instead of a new one being opened?

    Read the article

  • Nagios3 check_httpname gives 503 response; from command line I get a 200 response

    - by Michael T. Smith
    We're using Nagios to monitor our site (and a bunch of other stuff.) For some odd reason, when I test out the command

        /usr/lib/nagios/plugins/check_http -H 'domainname.com'

    the response that comes back is HTTP/1.1 200 OK, but when I set up the service to do it:

        # Check that domain is running
        define service {
                hostgroup_name          hostgroup
                service_description     host site
                check_command           check_httpname!domainname.com
                use                     generic-service
                notification_interval   1       ; set > 0 if you want to be renotified
        }

    the response that comes back is HTTP/1.1 503 Service Unavailable. Does anyone know why this would be happening?
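    An added quick check, not from the poster: run the plugin the same way the daemon does — as the Nagios user — which often exposes a DNS or proxy difference between root's shell and the Nagios environment. The plugin path and host name are taken from the question; the "nagios" user name is an assumption:

        sudo -u nagios /usr/lib/nagios/plugins/check_http -H 'domainname.com' -v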

    Read the article

  • How to recursively move all files (including hidden) in a subfolder into a parent folder in *nix?

    - by deadprogrammer
    This is a bit of an embarrassing question, but I have to admit that this late in my career I still have questions about the mv command. I frequently have this problem: I need to move all files recursively up one level. Let's say I have folder foo, and a folder bar inside it. bar has a mess of files and folders, including dot files and folders. How do I move everything in bar up to the foo level? If foo is empty, I simply move bar one level up, delete foo and rename bar to foo. Part of the problem is that I can't figure out what mv's wildcard for "everything including dots" is. A part of this question is this: is there an in-depth discussion somewhere of the wildcards that the cp and mv commands use (googling this only brings up very basic tutorials)?
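    A minimal added sketch of one common approach, assuming bash (dotglob is a bash-specific shell option):

        cd foo
        shopt -s dotglob        # make * also match names starting with a dot (but not . or ..)
        mv bar/* .              # move visible and hidden entries up one level in one go
        shopt -u dotglob
        rmdir bar               # bar should be empty now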

    Read the article

  • ASP.NET website http requests appear to be queueing

    - by scolemann
    We cloned our servers this weekend into a colo. All non-ASP.NET sites are performing great, but ASP.NET sites are very slow. It appears to be an issue with the requests/connections, but I cannot figure out where. The reason I think it is a problem with the connections is that when I launch Fiddler and watch the requests, all requests appear to happen sequentially. Even the static image requests are taking 5 seconds, and another one doesn't start until the first one finishes. MaxConnections is set to 100 in machine.config and the "website connections" are set to unlimited. Any idea what else could be causing this? from machine.config:

    Read the article

  • How to add timestamp to the logfilename with the apache log4j

    - by swati
    I am new to using the Apache logger. I have downloaded log4j-xx and I have the following text configuration file:

        # Set root logger level to DEBUG and its only appender to mainFormat.
        log4j.rootLogger = TRACE, mainFormat, FILE
        # mainFormat is set to be a ConsoleAppender.
        log4j.appender.mainFormat=org.apache.log4j.ConsoleAppender
        # mainFormat uses PatternLayout.
        log4j.appender.mainFormat.layout=org.apache.log4j.PatternLayout
        log4j.appender.mainFormat.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
        #File makes a file of the output.
        log4j.appender.FILE=org.apache.log4j.FileAppender
        log4j.appender.FILE.File=log4j_HAPR001_OutputFile.log
        log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.FILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

    I use the above config file to create the log file. Now I want to add the current timestamp to the log file name. Is there any way to do this? If yes, can someone please give me instructions on how to do it. Thanks in advance. Regards, Swati

    Read the article

  • Two Ubuntu Instances (Guest) networking with XP (Host) in virtual box v3.1.4

    - by EnthuCrazy
    So here is my current objective:
    1. I need to create two guest instances of Ubuntu Desktop 9.10 in VirtualBox on a Windows XP host. (This is needed for communications later on.) (This step is almost done.)
    2. I need to establish networking between all three OSes: the host and the two guests (Guest1 - Host - Guest2).
    I know that generally, to establish networking between a Windows host and an Ubuntu guest, we set up a bridged connection. But here there are two guests, and primarily I need networking between the two guests (Ubuntu to Ubuntu). So will there need to be a change in the tap0 and tap1 interfaces when we set up a bridge, or is there a better way to implement this? Please explain the procedure.
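    An added sketch of one way to wire this up, not from the original poster; the VM names "Ubuntu1"/"Ubuntu2" and the network name "guestnet" are placeholders. The idea is a second NIC per guest on a VirtualBox internal network for guest-to-guest traffic, while the first NIC stays bridged (or NAT) for host access:

        # run on the Windows XP host while both VMs are powered off
        VBoxManage modifyvm "Ubuntu1" --nic2 intnet --intnet2 guestnet
        VBoxManage modifyvm "Ubuntu2" --nic2 intnet --intnet2 guestnet
        # nic1 is left bridged/NAT on both guests so each can still reach the host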

    Read the article

  • traffic shaping for certain (local) users

    - by JMW
    Hello, I'm using Ubuntu 10.10. I have a local backup user called "backup". :) I would like to give this user just 1 Mbit of bandwidth, no matter which software wants to connect to the network. This solution doesn't work:

        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        iptables -t mangle -A POSTROUTING -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        tc qdisc del dev eth0 root
        tc qdisc add dev eth0 root handle 2 htb default 1
        tc filter add dev eth0 parent 2: protocol ip pref 2 handle 50 fw classid 2:6
        tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit
        tc qdisc show dev eth0
        tc class show dev eth0
        tc filter show dev eth0

    Does anyone know how to do it? Thanks a lot in advance
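    An added, hedged observation (not from the poster): with the fw classifier, the filter's handle has to equal the fwmark set by iptables, so with --set-mark 12 the filter line would look something like this, the rest of the setup left as in the question:

        # match packets carrying fwmark 12 and steer them into class 2:6
        tc filter add dev eth0 parent 2: protocol ip pref 2 handle 12 fw classid 2:6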

    Read the article

  • bind tmux prefix to OS X cmd key (or any other binding)

    - by rubenfonseca
    Hi all. I'm used to iTerm2 (or Terminal.app, for that matter) on OS X, but I want to move to tmux (or screen; the problem is similar for both apps). My idea is to have a single iTerm tab with a tmux session opened with multiple tabs. To make the transition there is one basic feature I need to configure in tmux: switch to tab 'n' using cmd + n (like Firefox, Chrome, iTerm2 itself, etc). However, I can't find a way of mapping the cmd key on the Mac keyboard. I first tried to use cmd as the prefix key, with no success. I've tried setting

        set-option -g prefix M-a    (hoping for Meta-a)
        set-option -g prefix ^a     (hoping for ^ to work)

    but nothing works. Is this possible? I don't really need to bind the prefix to cmd, but I want to be able to change tmux tabs with cmd+n. Thank you

    Read the article

  • .htaccess template, suggestions needed

    - by purpler
    DefaultLanguage en-US
    FileETag None
    Header unset ETag
    ServerSignature Off
    SetEnv TZ Europe/Belgrade

    # Rewrites
    Options +FollowSymLinks
    RewriteEngine On
    RewriteBase /

    # Redirect to WWW
    RewriteCond %{HTTP_HOST} ^serpentineseo.com
    RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L]

    # Redirect index to root
    RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*index\.html\ HTTP/
    RewriteRule ^(.*)index\.html$ /$1 [R=301,L]

    # Cache media files:
    ExpiresActive On
    ExpiresDefault A0

    # Month
    <filesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
    Header set Cache-Control "max-age=2592000, public"
    </filesMatch>

    # Week
    <FilesMatch "\.(css|pdf)$">
    Header set Cache-Control "max-age=604800"
    </FilesMatch>

    # 10 Min
    <FilesMatch "\.(html|htm|txt)$">
    Header set Cache-Control "max-age=600"
    </FilesMatch>

    # Do not cache
    <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
    Header unset Cache-Control
    </FilesMatch>

    # Compress output
    <IfModule mod_deflate.c>
    <FilesMatch "\.(html|js|css)$">
    SetOutputFilter DEFLATE
    </FilesMatch>
    </IfModule>

    # Error Documents
    ErrorDocument 206 /error/206.html
    ErrorDocument 401 /error/401.html
    ErrorDocument 403 /error/403.html
    ErrorDocument 404 /error/404.html
    ErrorDocument 500 /error/500.html

    # Prevent hotlinking
    RewriteCond %{HTTP_REFERER} !^$
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?serpentineseo.com/.*$ [NC]
    RewriteRule \.(gif|jpg|png)$ http://www.serpentineseo.com/images/angryman.png [R,L]

    # Prevent offline browsers
    RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [OR]
    RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Custo [OR]
    RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
    RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]
    RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR]
    RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR]
    RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR]
    RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR]
    RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR]
    RewriteCond %{HTTP_USER_AGENT} ^FlashGet [OR]
    RewriteCond %{HTTP_USER_AGENT} ^GetRight [OR]
    RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [OR]
    RewriteCond %{HTTP_USER_AGENT} ^GrabNet [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Grafula [OR]
    RewriteCond %{HTTP_USER_AGENT} ^HMView [OR]
    RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
    RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [OR]
    RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
    RewriteCond %{HTTP_USER_AGENT} ^InterGET [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [OR]
    RewriteCond %{HTTP_USER_AGENT} ^JetCar [OR]
    RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [OR]
    RewriteCond %{HTTP_USER_AGENT} ^larbin [OR]
    RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [OR]
    RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Navroad [OR]
    RewriteCond %{HTTP_USER_AGENT} ^NearSite [OR]
    RewriteCond %{HTTP_USER_AGENT} ^NetAnts [OR]
    RewriteCond %{HTTP_USER_AGENT} ^NetSpider [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [OR]
    RewriteCond %{HTTP_USER_AGENT} ^NetZIP [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Octopus [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [OR]
    RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [OR]
    RewriteCond %{HTTP_USER_AGENT} ^pavuk [OR]
    RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [OR]
    RewriteCond %{HTTP_USER_AGENT} ^RealDownload [OR]
    RewriteCond %{HTTP_USER_AGENT} ^ReGet [OR]
    RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [OR]
    RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [OR]
    RewriteCond %{HTTP_USER_AGENT} ^SuperBot [OR]
    RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Surfbot [OR]
    RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [OR]
    RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebAuto [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebCopier [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebFetch [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebReaper [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebSauger [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebStripper [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Wget [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Widow [OR]
    RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Zeus
    RewriteRule ^.*$ http://www.google.com [R,L]

    # Protect against DOS attacks by limiting file upload size
    LimitRequestBody 10240000

    # Deny access to sensitive files
    <FilesMatch "\.(htaccess|psd|log)$">
    Order Allow,Deny
    Deny from all
    </FilesMatch>

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:
    - RAID 1: two 17 GB SCSI disks, Seagate ST318417W
    - RAID 5: three 4 GB SCSI disks, two Seagate ST34573W and one ST34572W
    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011.
    This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure from this server through the November target as is without restoring the RAID redundancy — it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable.
    As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into Netware, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone.
    Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good.
    As for finding reliable spares, I have no clue where to even begin looking for a new 4 GB SCSI drive, or even which exact SCSI system I'm looking for, as it's gone through a few different iterations over time.
    Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of Netware and DOS than I ever developed, or have since forgotten if I did have it (I'm not exactly a DOS neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well.
    As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old Netware server, a line on a floppy with better software for the RAID controller, a recommendation on a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.
    Update: For backups, we have good (recently verified via restore) backups of the data only — nothing for the software that actually runs things.
    Update 2: Just a progress report: I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty Netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and Netware volumes on my existing server, figuring out from that information what modules need to be added to Netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.
    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • Pinning based on origin of a reprepro repository.

    - by Shtééf
    I'm on Ubuntu 10.04, and trying to set up a repository using reprepro. I'd also like to pin everything in that repository to be preferred over anything else, even if the packages are older versions. (It will only contain a select set of packages.) However, I cannot seem to get the pinning to work, and believe it has something to do with the repository side of things, rather than the apt configuration on the client. I've taken the following steps to set up my repository:
    1. Installed a web server (my personal choice here is Cherokee),
    2. Created the directory /var/www/apt/,
    3. Created the file conf/distributions, like so:

        Origin: Shteef
        Label: Shteef
        Suite: lucid
        Version: 10.04
        Codename: lucid
        Architectures: i386 amd64 source
        Components: main
        Description: My personal repository

    4. Ran reprepro export from the /var/www/apt/ directory.
    Now on any other machine, I can add this (empty) repository over HTTP to my /etc/apt/sources.list, and run apt-get update without any errors:

        Ign http://archive.lan lucid Release.gpg
        Ign http://archive.lan/apt/ lucid/main Translation-en_US
        Get:1 http://archive.lan lucid Release [2,244B]
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Hit http://archive.lan lucid/main Packages
        Hit http://archive.lan lucid/main Sources

    In my case, now I want to use an old version of Asterisk, namely Asterisk 1.4. I rebuilt the asterisk-1:1.4.21.2~dfsg-3ubuntu2.1 package from Ubuntu 9.04 (with some small changes to fix dependencies) and uploaded it to my repository. At this point I can see the new package in aptitude, but it naturally prefers the newer Asterisk 1.6 currently in the Ubuntu 10.04 repositories. To try and fix that, I have created /etc/apt/preferences.d/personal like so:

        Package: *
        Pin: release o=Shteef
        Pin-Priority: 1000

    But when I try to install the asterisk package, it will still prefer the 1.6 version over my own 1.4 version. This is what apt-cache policy asterisk shows:

        asterisk:
          Installed: (none)
          Candidate: 1:1.6.2.5-0ubuntu1
          Version table:
             1:1.6.2.5-0ubuntu1 0
                500 http://nl.archive.ubuntu.com/ubuntu/ lucid/universe Packages
             1:1.4.21.2~dfsg-3ubuntu2.1shteef1 0
                500 http://archive.lan/apt/ lucid/main Packages

    Clearly, it is not picking up my pin. In fact, when I run just apt-cache policy, I get the following:

        Package files:
         100 /var/lib/dpkg/status
             release a=now
         500 http://archive.lan/apt/ lucid/main Packages
             origin archive.lan
         500 http://security.ubuntu.com/ubuntu/ lucid-security/multiverse Packages
             release v=10.04,o=Ubuntu,a=lucid-security,n=lucid,l=Ubuntu,c=multiverse
             origin security.ubuntu.com
        [...]

    Unlike Ubuntu's repository, apt doesn't seem to pick up a release line at all for my own repository. I'm suspecting this is the cause of why I can't pin on release o=Shteef in my preferences file. But I can't find any noticeable difference between my repository's Release files and Ubuntu's that would cause this. Is there a step I've missed or a mistake I've made in setting up my repository?
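    An added, hedged way to compare what apt actually fetched with what the pin expects; the URL and cache-file paths are guesses based on the sources.list line implied by the apt-get update output above:

        # the Origin/Suite/Codename fields apt should be seeing from the custom repository
        curl -s http://archive.lan/apt/dists/lucid/Release | head -n 10
        # what apt cached locally for that repository (file name mirrors the URL)
        grep -i '^Origin' /var/lib/apt/lists/archive.lan_apt_dists_lucid_Release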

    Read the article
