Search Results

Search found 54098 results on 2164 pages for 'something broken'.

Page 370 of 2164

  • Maxtor 500GB external hard drive not being detected but power is going to it?

    - by ClarkeyBoy
    I have two Maxtor OneTouch 4 Lite 500GB external hard drives (part no. 9NT2A4-500). They both used to work fine on my old laptop (an Acer), but I have not used them for about a year, since that laptop was stolen and I got this one (also an Acer, an Aspire 7738G).

    I have one plugged into the mains with one of the leads I believe was supplied with them. It appears to be receiving power: the unit is warm, the power light on the unit itself is on, and the mains adapter is fairly warm. I also have it plugged into my laptop with a USB lead which I have tested on my MP3 player, so I know the lead works. However, the drive is not showing up on my computer. I have tried checking for new hardware, installing the software that was supplied with it, checking drive letters in case it is registered as C: or something odd, and checking for problems, but I can't find any cause. It does appear to be starting up and, possibly, shutting down and restarting constantly (that's what it sounds like, although I can't be certain).

    I have had both hard drives stored in different places for the last year and they're both doing the same thing. If it were only one I'd guess it had got damaged or corrupted, but since it is both I doubt that is it. The only things in common with both of them are the leads and the laptop; I know the USB lead works and assume the mains lead works, as there is power going to the unit.

    Has anyone come across this before, or does anyone have any idea what the cause or solution might be? Any help would be greatly appreciated. Regards, Richard
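    One quick check worth doing (assuming the new laptop runs Windows, since that is where the drives are being tested): see whether the disk is detected at all but simply has no drive letter or partition. A minimal sketch using diskpart from a command prompt; the disk number shown is just an example:

        diskpart
        rem does the Maxtor appear in this list at all, even without a letter?
        list disk
        rem replace 1 with whatever number diskpart reports for the external drive
        select disk 1
        rem a healthy volume with no letter can be given one with "assign"
        list volume

    If the drive does not appear even in list disk, the enclosure's USB bridge or power circuitry is a more likely suspect than anything on the laptop.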

    Read the article

  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around), I am re-evaluating my strategy for protecting confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (a Lenovo T61 with 8 GB of RAM and a 2 GHz dual-core CPU). It runs Windows 7 fine but it is no speed demon, and it doesn't support the AES instruction set.

    I've been using a TrueCrypt volume, mounted on demand, for really important stuff like financial statements forever. Nothing else is encrypted. I just finished evaluating EFS and BitLocker and took a closer look at TrueCrypt again. I've come to the conclusion that boot partition encryption via BitLocker or TrueCrypt is not worth the hassle. I may decide in the future to use BitLocker or TrueCrypt to encrypt one of the data volumes, but at this point I intend to use EFS to encrypt the parts of my hard drive that contain data I wouldn't want exposed.

    The purpose of this post is to get your feedback about which folders should be encrypted from a general point of view (of course everyone will have something specific in addition). Here is what I have thought of so far (I will update if I think of something else):

    1) AppData\Local\Microsoft\Outlook - Outlook files.
    2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles - Thunderbird profiles; I'm not sure yet where exactly the data is stored.
    3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups - Firefox bookmark backups. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet.
    4) Bookmarks for Chrome (I don't know where its bookmarks are) and Internet Explorer ($Username\Favorites) - I don't really use them, but why not secure those as well.
    5) The Downloads\, My Documents\ and My Pictures\ folders I don't think I need to encrypt wholesale - I don't need to protect, say, the latest service pack for Visual Studio. So I will probably create a subfolder called "Secure" in each of these folders and set it to "Encrypted", and save anything sensitive there.

    Any other suggestions? Again, this is from the point of view of your "regular office user".
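    For reference, a folder can be marked for EFS encryption from the command line as well as from Explorer. A minimal sketch, assuming the "Secure" subfolder layout described above (the folder names are only examples):

        rem encrypt the folder and everything in it; files added later inherit encryption
        cipher /e /s:"%USERPROFILE%\Documents\Secure"
        rem afterwards, back up the EFS certificate and private key - without it the data is unrecoverable
        cipher /x "%USERPROFILE%\efs-backup"

    The exported .pfx should of course be stored somewhere other than the laptop itself.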

    Read the article

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512 MB minimum guaranteed memory, 510 MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all the necessary components (LAMP, nginx, etc.).

    After that I decided to stress test my website (which uses Drupal 6.16 with caching and all possible optimizations enabled) using a utility called "Webserver Stress Tool 7", and it seems to me that the results aren't any good - here is a graph (sorry, as a new user I'm not allowed to post images). As you can see, the response time grows very quickly with the number of simultaneous users: with 10 simultaneous users it is about 1000 ms, and with 100 simultaneous users it is about 15000 ms (15 s!).

    The question is: do you think this is normal behaviour for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what in particular could it be? Any other suggestions on how to speed this up a little?
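    As a cross-check of the stress-tool numbers, a quick benchmark run from another machine with ApacheBench (shipped with Apache's utilities) would show whether the slowdown reproduces and what the server is doing while it happens. A sketch only - the URL and request counts are examples:

        # 500 requests, 10 at a time, against the Drupal front page
        ab -n 500 -c 10 http://example.com/
        # on the VPS while the test runs: is it CPU-bound, swapping, or stuck in MySQL?
        top

    If one PHP request already takes close to a second on an idle box, the problem is the application stack rather than concurrency settings.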

    Read the article

  • Flushing iptables broke my pipe, how can I save my instance?

    - by Niels
    I was setting up my iptables rules when I ran iptables -F and my SSH pipe broke. This is the last output of my session:

        root@alfapaints:~# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere     state NEW,ESTABLISHED tcp dpt:2222
        ACCEPT     tcp  --  li465-68.members.linode.com  anywhere     state NEW,ESTABLISHED tcp dpt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere     tcp dpt:9200 state NEW,ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere     tcp dpt:http state NEW,ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere     udp spt:domain

        Chain FORWARD (policy DROP)
        target     prot opt source                       destination

        Chain OUTPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere     state ESTABLISHED tcp spt:2222
        ACCEPT     tcp  --  anywhere                     anywhere     state ESTABLISHED tcp spt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere     tcp spt:9200 state ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere     tcp spt:http state ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere     udp dpt:domain

        root@alfapaints:~# iptables -F
        Write failed: Broken pipe

    I tested my connection just before and I was able to connect with SSH. Now I have done an nmap scan and not a single port is open any more. I know my VPS is running on VMware ESXi - could a reboot help? Or, if not, could I attach and mount the disk to another VM to save the data? Does anybody have some advice, and maybe an explanation of what happened or what could have caused my pipe to break?

    PS: I didn't save my rules in the iptables config directories, but used a file stored in ~/rules.config and applied the rules like this: iptables-restore < rules.config. So a reboot would probably help? Thanks a lot in advance.
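    For what it's worth, the flush itself explains the lockout: iptables -F removes all the ACCEPT rules but leaves the chain policies at DROP, so every packet - including the established SSH session - is dropped. Since the rules live only in ~/rules.config and not in an init script, a reboot should indeed bring the VPS back with empty, default-ACCEPT chains. When changing rules over SSH in future, a common precaution is to schedule an automatic rollback before touching anything; a rough sketch, assuming the at daemon is installed (timings and the job number are illustrative):

        # in 2 minutes, reset policies to ACCEPT and flush - unless the job is cancelled first
        echo 'iptables -P INPUT ACCEPT; iptables -P OUTPUT ACCEPT; iptables -P FORWARD ACCEPT; iptables -F' | at now + 2 minutes
        # apply the new rules
        iptables-restore < ~/rules.config
        # still connected? cancel the rollback (atq shows the pending job number)
        atq
        atrm <job-number>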

    Read the article

  • What GPT partition type to use for protecting DRBD metadata?

    - by Carsten Scholtes
    I'm planning to install a DRBD device on a (replicated) disk with two GPT partitions. DRBD requires some space for (preferably "internal") metadata at the end of the underlying device. I'm hesitant to leave this space unpartitioned (or unformatted inside a normal partition), so I'd like to reserve an extra partition at the end of the underlying disk device for the metadata. (If I understand correctly, DRBD would not care about the partition or its type and could then use that space exclusively.)

    My question is: which would be a suitable GPT partition type for such a metadata partition?

    - It should not be a type interpreted while booting (such as EF00 EFI System).
    - It should not be a type prone to being modified accidentally by the booted OS (such as 8200 Linux swap, 8e00 Linux LVM, or fd00 Linux RAID). (The booted OS will be Ubuntu Linux 12.04.3.)
    - It should not be a type indicating a normal filesystem (such as 0c01 or 8301), prone to being formatted accordingly.
    - It should not be a type requiring any special content in the partition (since the content is to be handled exclusively by DRBD).
    - It should express the purpose of being reserved for something special (namely DRBD).

    (The type codes listed are as provided by gdisk. I'm thinking about using some type unlikely to be used by the OS (maybe bf0a Solaris Reserved 4) or an invented(?) type such as fd01 (close to fd00 Linux RAID). Would something like this be suitable, too dangerous, or even possible?)
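    Whatever type code is chosen, creating the reserved partition with an explicit type is straightforward with sgdisk (from the same package as gdisk). A sketch only - the partition number, size, placement and the bf0a code are examples, not a recommendation:

        # create a small partition for the metadata and give it a descriptive name
        sgdisk --new=3:0:+128M --change-name=3:"drbd-meta" /dev/sdX
        # tag it with an out-of-the-way type code, e.g. bf0a (Solaris Reserved 4)
        sgdisk --typecode=3:bf0a /dev/sdX
        # verify the result
        sgdisk --print /dev/sdX

    DRBD's internal metadata is small (roughly 32 MB per TB of backing storage), so the partition does not need to be large.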

    Read the article

  • How do I find funny pictures?

    - by Hanno Fietz
    No, not lolcats. And I'm not really looking for a specific site, either. I have often wished that I had some funny picture to illustrate a presentation, a website, a post, an email, or something else. Google image search and stock photo services have hardly ever helped me, although that may be because I'm doing something wrong.

    Jeff Atwood seems to have no problem finding funny pictures for his codinghorror and stackoverflow blogs, as well as for the error messages on the trilogy sites. One of my favourites was this elephant. Other bloggers also seem to be quite good at it. I'm wondering if I simply lack the creativity, or if there are sources or methods I don't know about.

    I could think of the following ways to get pictures, but I'm not sure whether this is really "how they do it":

    - Keep a collection of pictures that you stumbled upon and liked (it takes quite some time to build up a proper library); when you need a picture, there's one in there. Maybe keep pictures on paper, too, like from magazines or ads.
    - When you are looking for a picture, search online (Flickr, Google, stock photos). This has never really worked for me, and I don't know why.
    - Produce the pictures yourself, i.e. have a good library of source material, or find some online, and apply some creativity and suitable software. I could imagine that this could work well once you have the practice.

    Read the article

  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each containing an LVM partition, which together formed a volume group. On top of that I had two LVs, one for my / directory and one for my /home/ directory. Yesterday the drive holding my / directory failed, and I'm trying to recover at least my /home/ directory. What I've done so far:

    1) Boot a live system.
    2) Extract the LVM2 metadata from the working HDD using dd.
    3) Copy the metadata to /etc/lvm/backup/vg0.

    Now I'm trying to do this:

        pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2

    But I always get:

        Couldn't find device with uuid '[uuid of broken hdd]'.
        Couldn't find device with uuid '[uuid of working hdd]'.
        Device /dev/sdb2 not found (or ignored by filtering).

    I have confirmed that /dev/sdb2 exists, and I've commented out all filtering settings in /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: what might be the problem? Is it even possible to restore this partition? (As I'm writing this I'm starting to think it's impossible.)

    Edit: Okay, it looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD everything worked "fine", and I could at least partially mount my LVM partitions without problems. (I will answer the question in 4 hours, but if anyone still has some hints or tips, please share!)
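    For anyone in a similar spot, the usual recovery sequence with a metadata backup file is pvcreate with --restorefile followed by vgcfgrestore and a partial activation. A sketch, reusing the backup file and UUID placeholders from above (device and VG names are examples):

        # recreate the PV label on the surviving partition, reusing its old UUID
        pvcreate --uuid "uuid-of-working-hdd" --restorefile /etc/lvm/backup/vg0 /dev/sdb2
        # restore the VG metadata from the same backup file
        vgcfgrestore -f /etc/lvm/backup/vg0 vg0
        # activate whatever can be activated with one PV missing
        vgchange -ay --partial vg0

    LVs that lived entirely on the surviving disk (such as the /home LV here, if it did) should then be mountable read-only.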

    Read the article

  • Who has files open on a Linux server

    - by Robert
    I have the fairly common task of finding out who has files open on our Linux (Ubuntu) file server in our Windows environment. We use Samba on the network, and I use PuTTY from my workstation to open a shell and run bash scripts.

    I have been using something like this to find which files are open (it returns a list of process IDs with each open file):

        Robert:$ sudo lsof | grep "/srv/office/some/folder"

    Then I follow up with something like this to show who owns the process (it returns the name of the machine on the network, via IPv4, that owns the process):

        Robert:$ sudo lsof -p 27295 | grep "IPv4"

    Now I know which Windows client has the file open and can take action from there. As you can tell, this is not difficult, but it is time consuming. I would prefer to have a Windows application I can run that just gives me what I want. So I have been thinking about creating some process I can run on Linux that listens on a port and returns a clean list of all open files along with the IP address of the host that has each file open, plus a small Windows client application that sends the request on that port. It seems like this should be a very common need, but I cannot find anything like this that has been done before. Any suggestions?
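    Since the clients all come in over Samba, smbstatus may already provide most of this in one step, without walking lsof output. A quick sketch (the share path is just the example from above):

        # all files currently locked via Samba, with PID, user and the locked path
        sudo smbstatus -L
        # map those PIDs to the connecting machine name / IP address
        sudo smbstatus -p
        # or filter the lock list for one folder
        sudo smbstatus -L | grep "/srv/office/some/folder"

    Wrapping that in a tiny script exposed over SSH (or a one-line CGI) would give the "clean list over a port" without writing a custom daemon.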

    Read the article

  • SPDIF passthrough not working in Windows 7

    - by adriangrigore
    Hi, I'm running Windows 7 on a computer with an Audigy Platinum eX sound card connected to a surround receiver via optical cabling. Sound works fine when listening to non-surround audio sources such as Windows sounds or MP3s. However, when I view a DVD in Media Center and the S/PDIF passthrough kicks in, I only hear an awful noise instead of the movie soundtrack. Also, the receiver does not show the Dolby Digital or DTS symbol, but stays at Dolby Pro Logic, so it seems it is not identifying the sound encoding properly.

    I could switch off S/PDIF passthrough and use the sound card's decoder instead, but that's not an option for me, since it would create more problems with regular MP3 playback via the additional stereo receiver that is also connected to the same sound card.

    I've tried both the default Audigy drivers that come with Windows 7 and the latest drivers from the Sound Blaster website, but the problem remains unchanged. I have also verified that the receiver's Dolby Digital decoder is not broken by successfully connecting it to my PS3 and playing a Dolby Digital DVD. Besides, S/PDIF passthrough was working fine in Vista before I upgraded to Windows 7. Is there anything else I could try?

    Read the article

  • Tracking Security Vulnerability remediation

    - by Zypher
    I've been looking into this for a little while, but haven't really found anything suitable. What I am looking for is a system to track security vulnerability remediation status - something like "Bugzilla for IT". It should be fairly simple and allow the following:

    - Batch entry of new vulnerabilities that need to be remediated
    - Per-user assignment
    - AD/LDAP authentication
    - A simple interface to track progress - research, change control status, remediated, etc.
    - Historical search ability
    - The ability to divide by division
    - The ability to store proof of resolution for the security team to access
    - Dependency tracking
    - Linux based is best (that's my group :) )
    - Free is good, but cost doesn't matter so much if the system is worth it

    The system doesn't have to have all of these features, but if it did that would be great. Yes, we could use our helpdesk software, but that has a bunch of pitfalls, such as triggering SLA alerts and penalties, and not being easily searchable outside of a group. Most of what I have found are bug tracking systems that are geared towards developers and are honestly way overkill for what I am looking for. Server Fault's input is greatly appreciated, as always!

    Read the article

  • XP Client for NFS failure dialog on startup, but drive mapping works

    - by Matt Bennett
    I'm mounting an NFS share on some Windows machines using the tools that come in the Services for UNIX Administration toolkit. I've set up the User Name Mapping service to use local passwd and group files. I had to start the User Name Mapping service manually, and then I created an 'advanced map' from the XP machine's user to a UID that exists on my NFS server, like so:

        Windows User: Matt Bennett
        UNIX Domain:  PCNFS
        UNIX User:    mattbennett
        UID:          10250
        Primary:      *

    I can map a network drive without any issues, and it correctly identifies the UID and GID to use, but when I reboot I get this message: "An error occurred while connecting to the NFS server. Make sure that the Client for NFS service has started. If the problem persists make sure Client for NFS service can communicate with User Name Mapping or PCNFS server."

    After dismissing the dialog, the machine finishes booting and the network drive is there in My Computer with the title "Disconnected Network Drive", but if I open it I can see the network share without a problem, and it then drops the 'disconnected' from its title. It seems like the services are starting in the wrong order or something, so the first attempt to connect fails but subsequent ones work as expected. There don't seem to be any symptoms apart from the dialog box, but obviously something's not quite right. What have I done wrong? Thanks, Matt.
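    If it really is a startup-ordering problem, one thing to try is making the Client for NFS service depend on the User Name Mapping service, so it only starts once the mapper is up. A sketch only - the short service names below are assumptions, so confirm them first with sc query or services.msc:

        rem list the SFU-related services to confirm their short names
        sc query state= all | findstr /i "nfs map"
        rem make Client for NFS (assumed here to be "NfsClnt") wait for User Name Mapping (assumed "MapSvc")
        sc config NfsClnt depend= MapSvc

    The dependency can be removed again with sc config NfsClnt depend= "" if it causes trouble.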

    Read the article

  • List symlinks in specific relative directories

    - by Clinton Blackmore
    I have a server that shares out user home folders over the network. Each user has a Cache folder. Sometimes a symlink is used to redirect this folder to the hard drive of whichever machine they are using (and sometimes that doesn't work and they have a broken symlink, which is a matter for another day). I'm trying to find out which users have symlinks and which don't.

    Within the shared folder, the path to a Cache folder looks like this, substituting the grade and username:

        $GRADE/$USERNAME/Library/Caches

    Right now I'm searching to see which users have symlinks and which do not. I've come up with:

        cd /path/to/shared/home/folders
        sudo find . -name "Caches" -exec ls -ld {} \;

    and get results like this:

        lrwxr-xr-x@  1 name0 ES_Students  27 Jan 18 11:05 ./CES_Grade_03/name0/Library/Caches -> /tmp/name0/Library/Caches
        drwx------  11 name1 ES_Students 374 Dec  8 15:44 ./CES_Grade_03/name1/Library/Caches
        lrwxr-xr-x@  1 name2 ES_Students  27 Feb 23 14:27 ./CES_Grade_03/name2/Library/Caches -> /tmp/name2/Library/Caches
        drwx------  17 name3 ES_Students 578 Jan 25 11:13 ./CES_Grade_03/name3/Library/Caches
        drwx------  12 name4 ES_Students 408 Mar 22 13:09 ./CES_Grade_03/name4/Library/Caches

    It nags at me that there must be a better way. Yes, it is good enough, and a one-off task, but I want to know how to do it right! Surely I should be able to do something like:

        cd /path/to/shared/home/folders
        sudo ls -ld **/**/Library/Caches

    I'm afraid I don't know the proper syntax, or whether there is a recursive directory-matching wildcard in bash, and my google-fu failed me. So, how do I properly formulate the search?
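    Two sketches of the kind of one-liner being asked about - one using bash's globstar option (bash 4 or later; note that older stock shells may not have it), and one restricting find to symlinks only. Paths are the examples from above:

        # bash 4+: ** matches any number of directory levels once globstar is on
        shopt -s globstar
        sudo ls -ld /path/to/shared/home/folders/**/Library/Caches

        # or let find do the matching, listing only the Caches entries that are symlinks
        sudo find /path/to/shared/home/folders -maxdepth 4 -path '*/Library/Caches' -type l -ls

    Dropping -type l (or swapping it for -type d) lists the non-symlinked folders instead, which gives the other half of the answer.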

    Read the article

  • Can one really fry a monitor by setting the wrong HorizSync and VertRefresh?

    - by rumtscho
    I've encountered this problem on several different systems with several different monitors: a monitor works perfectly under Windows; I install Linux and the maximum resolution is stuck at some impossibly low value, usually 640x480, and changing it in xorg.conf doesn't work. The X.org log file then shows that the driver cannot determine the correct refresh rates for the monitor, so it ignores everything in xorg.conf and just loads some minimalistic default mode.

    Googling the problem leads to an easy solution: set HorizSync and VertRefresh in xorg.conf, and everything works. The problem seems to be a common one, and I've seen dozens of results recommending this solution. Each of them carries the warning that you should use the value ranges provided with the monitor, because if you don't, and your video card sends a signal with the wrong refresh rate, it can damage your monitor. Of course, you no longer have the user manual for your monitor; if you are lucky enough to find one in the attic or on the net, it doesn't contain any information about the supported refresh rates. So you just type in the values suggested in the solution description, which vary wildly depending on your source, and cross your fingers. You restart, and...

    ...you've set the wrong values. So the monitor shows a short message like "input signal out of range", you do a hard restart, repair your xorg.conf in recovery mode, and everything is fine - including your monitor.

    So does this warning reflect a real possibility, or is it just a geeky urban legend? Or is it something that used to happen in the past, before manufacturers started protecting monitors against it? Is it technically possible with every monitor technology, or is it maybe something that can only happen to a CRT? If you think it's true, why? Have you ever witnessed a monitor die from a wrong refresh configuration, or have you read about it in a reputable source?
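    For context, the fix being discussed is a Monitor section along these lines. The ranges shown are purely an example of the kind of values people paste in, not a recommendation for any particular monitor:

        Section "Monitor"
            Identifier   "Monitor0"
            # horizontal sync range in kHz, vertical refresh in Hz - ideally taken from the monitor's spec sheet
            HorizSync    30.0 - 80.0
            VertRefresh  50.0 - 75.0
        EndSection

    The whole question is what happens when those two lines overstate what the panel or tube can actually handle.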

    Read the article

  • How do I use a DNS server to create simple HA (high availability) for my website?

    - by marc22
    Welcome. How can I use a DNS server to create simple HA (high availability) for a website? For example, if my web server at 192.168.0.120:80 is offline, traffic should go to 192.168.0.130:80 (for easier understanding I use internal IPs; in reality these would be at different hosting companies).

    You are right, "high availability" was the wrong word - of course I was thinking about failover. Using several IPs in A records is good for simple load balancing, but not in the case where I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we are working on it") instead of them getting "can't establish connection".

    I was thinking about setting it up something like this: two DNS servers, one of them installed on the web server itself, both with a low TTL on my domain, and two NS records - the first pointing to the DNS server running alongside my Apache server, the second to the other DNS server. If a user tries to connect, he will get the IP of the web server from the first DNS server; if that DNS server is offline (in which case the web server is probably also down), the resolver will try the second NS record, which points to the other DNS server, and that one will answer with the IP of the "backup" page.

    That's what I would like to do. If you have other ideas, please share. A reverse proxy is not an option, because the IP of the server can change, and I may use another country for the backup.

    Read the article

  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stopped working properly because it keeps trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache; the only thing that worked was disabling JavaScript altogether. This seems to be the culprit link:

        http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js

    What can I do?

    EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, and Firefox consumes it at that pace. What I don't understand is why Safari and Chrome get it right away. Last night I left Firefox open all night to give it a chance to load the file, in the hope that it would be cached and the next time there would be no need to fetch it. This morning the page loaded successfully, but it had not been cached, because the next request failed in the same way. Here's a video showing the problem:

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing at 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly.

    The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us.

    My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see three possible types of solution:

    - Use Linux-HA/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    - Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    - Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use "down" DNS servers.

    Anycast seems to work only at the IP routing level, and depends on route updates for server failure. Multicast seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
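    For reference, the resolver knobs referred to above live in /etc/resolv.conf. A sketch of the usual tuning - the addresses and values are examples, and as noted this only softens the failure rather than providing real failover:

        # /etc/resolv.conf
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        # per-server timeout in seconds, number of retry rounds, and rotate through the list
        options timeout:1 attempts:2 rotate

    Even with timeout:1, every process still eats that one-second stall whenever it happens to query the dead server first, which is exactly the residual slowness described above.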

    Read the article

  • Possible DNS issue after a reinstall of Windows Server 2000 (get off my lawn)

    - by cop1152
    I just replaced a drive on a Windows 2000 Server that replicates AD and hands out DHCP at one of our offices. I successfully joined it to the domain, set up a range of IPs, etc., but am still having issues.

    I cannot RDC to it by name or IP. I can ping it, browse to it with Windows Explorer, and remote to it with some other software, but not RDC. The other issue is this: users are unable to authenticate against it. They receive the message 'username or password incorrect' (or something like that), and changes made on the main domain controller seem to take forever to trickle down.

    The most significant entry in the DNS Server log is Event ID 7062: "The DNS Server Encountered a Packet Addressed to Itself". At least, I think it's significant. The Directory Services log shows numerous instances of Event ID 1265: "The attempt to establish a replication link with parameters failed with the following status: The DSA operation is unable to proceed because of a DNS lookup failure."

    Does this make any sense to anyone? I feel like it's something very simple that I am overlooking. Thanks in advance.

    Read the article

  • Sync clock on Windows XP machine to external (non-domain, non-workgroup) Windows Server 2008 R2 machine

    - by Eric
    I have two machines and I'd like their clocks to be in sync for various reasons. Machine 1 is an XP machine located in the office. Machine 2 is a VPS hosted by a third party running Windows Server 2008 R2. These machines are not in any kind of workgroup or domain together; they are completely separate. Machine 2 currently syncs once a week to time.windows.com, and its clock does seem to wander a bit within that week.

    What I would like to do is have Machine 1 set its clock based on the clock of Machine 2. I have tried configuring w32tm on the XP machine. This is what I used:

        w32tm /config /syncfromflags:manual /manualpeerlist:"<ip address of machine 2>"

    However, whenever I issue the /resync command I get "The computer did not resync because no time data was available". I have made sure to start the Windows Time service on Machine 2, and I have added firewall exceptions for UDP port 123. Is there something I need to configure on Machine 2 (other than just starting the Time service) in order to get it to respond?

    Edit: I have also run w32tm /config /reliable:YES /update on Machine 2. I am still getting "The computer did not resync because no time data was available". Is there something else I'm missing?
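    One thing that is easy to miss: on Server 2008 R2 the Windows Time service does not answer NTP queries by default - the NtpServer provider has to be switched on. A sketch of what that would look like on Machine 2 (standard W32Time registry path, but worth verifying before relying on it):

        rem enable the built-in NTP server provider and restart the time service
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer" /v Enabled /t REG_DWORD /d 1 /f
        net stop w32time
        net start w32time

        rem then, back on Machine 1:
        w32tm /resync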

    Read the article

  • Xp SP3 - Non-Functioning

    - by Josh
    OK, here is a crazy problem. I have an HP dv6000 laptop that can no longer hold a charge, so I hooked it up to my TV, bought a wireless mouse and keyboard, and configured XP to run with the lid closed. It gets medium to heavy usage, mainly streaming from sites like Netflix, Hulu, ABC, etc., and playing movies I ripped from DVD. It ran fine for a while, but recently it has been having some weird problems.

    Problem one: I used to use Firefox, but now when I run it I can type, yet as soon as I click something it shuts down completely - I can't even close it unless I use Task Manager. So I went and got Google Chrome, which is better but still hit or miss: it never completely shuts down, but sometimes I can't click anything or type anything, or I can only do one of the two. Also, when I open a new tab and try to move back to the old one, it automatically closes the old tab when I click on it.

    Problem two: when using the internet I can't use any other application or anything in Windows (e.g. Windows Explorer) until I force-quit all browsers with Task Manager, because I simply can't click on anything.

    Problem three: when I try to play a movie (with VLC), once it starts playing I can't click on anything, though I can use hotkeys, and once it stops everything is fine again.

    Well, I hope somebody knows what's going on, because I have no clue. If you need clarification or more info on something, I would be happy to provide it.

    Read the article

  • EEE PC dropbox server running 24/7

    - by microspino
    I'd like to create a mini Dropbox and print server for a small SOHO network of 5 users (all of them on Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept workday hours only, but the other two options would be better).

    Dropbox mini server: I mean I will have a 90 GB Dropbox folder on every computer on my LAN syncing with the device, and the one on the device syncing to the web.

    Print server: I have a Samsung SCX-4521F (fax/print/scan/copy), a Samsung ML-2010, an HP LaserJet P1006, an HP Color LaserJet CP1215, an HP Officejet Pro K8600 and an HP DesignJet 500. All of them are currently connected through little print servers, and I want to get rid of those by hooking everything up to this mini server. The fax/print/scan/copy machine needs to stay connected to a PC so that I can use the software that comes with it; the mini server would save me on this too.

    Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to let people use it from/to their computers through the mini server.

    I thought of a recent EeeBox machine because I have heard good things about Atom CPUs and because it seems that a recent BIOS version can switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be:

    - if you have something similar running for a long time;
    - if you disagree with this hardware choice and would suggest some other device;
    - if you see any issues with my printing setup;
    - anything else ;)

    My budget ranges from zero (using the right software to build something on top of an old PC) to 500€ max.

    Read the article

  • Which server software and configuration will retrieve mail from multiple POP servers and route it by address to the correct user?

    - by rolinger
    I am setting up a small email server on a Debian machine. It needs to pick up mail from a variety of POP servers and figure out who to deliver each message to from the address, but I'm not clear which software will do what I need, although it seems like a very simple question!

    For example, I have 2 users, Alice and Bob:

    - Any email to [email protected] ([email protected], etc.) should go to Alice; all other mail to domain.example.com should go to Bob.
    - Any email to [email protected] should go to Bob, and [email protected] should go to Alice.
    - Anything to *@bobs.place.com should go to Bob.
    - And so on...

    The idea is to pull together a load of mail addresses that have built up over the years and present them all as a single mailbox for Bob and another one for Alice.

    I'm expecting something like Postfix + Dovecot + Amavis + SpamAssassin + SquirrelMail to fit the bill, but I'm not sure where the routing above comes in. Can Postfix deal with it as a set of defined regular expressions, or is it a job for Amavis, or something else entirely? Do I need fetchmail in this mix, or is its role now covered by one of the other components? I think of it as content filtering, but everything I read about content filtering is focussed on detecting spam rather than routing email.
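    On the "can Postfix deal with it as a set of regular expressions" part: yes, virtual alias maps can be regexp-based, which is one way to express this kind of routing once fetchmail (or getmail) has pulled the messages in and handed them to Postfix. A rough sketch, with made-up patterns standing in for the real addresses:

        # /etc/postfix/main.cf
        virtual_alias_maps = regexp:/etc/postfix/virtual_regexp

        # /etc/postfix/virtual_regexp
        # anything starting with "alice" at domain.example.com goes to alice, the rest to bob
        /^alice[^@]*@domain\.example\.com$/   alice
        /^[^@]+@domain\.example\.com$/        bob
        /^[^@]+@bobs\.place\.com$/            bob

    Order matters in a regexp table - the first matching pattern wins - so the specific Alice rules have to sit above the catch-all Bob rules.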

    Read the article

  • How to set JS source directory in apache2?

    - by highBandWidth
    I am trying to run a very basic webserver for development/debugging. The static HTML seems to be delivered correctly, but the JavaScript libraries are not reaching the browser. The page HTML says something like:

        <html>
        <head>
        <script type='text/javascript' src="/lib/json.js"></script>
        ...

    Now, I have set up a mapping for /lib/ in my httpd.conf as:

        ScriptAlias /lib/ "/SomeFolder/lib/"

    When I do this, it can't fetch the files, because this is what I see in my Apache error log:

        ... [error] [client ::1] client denied by server configuration: /SomeFolder/lib/json.js, referer: http://localhost/SomeSite

    It seems that Apache is not allowing access to the folder, so I add this to httpd.conf:

        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>

    After this, browsing the page still does not run the JS; instead I see the following error in my Apache error log:

        [error] [client ::1] (13)Permission denied: exec of '/SomeFolder/lib/json.js' failed, referer: http://localhost/SomeSite

    So now it seems that Apache is trying to run the JS files on the server like a CGI script or something, but I have not made that folder a cgi-bin folder. The only lines where SomeFolder is mentioned by name in httpd.conf are these:

        ScriptAlias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>
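    For what it's worth, the "exec of ... failed" error is exactly what ScriptAlias produces: that directive tells Apache to treat everything under the mapped path as a CGI program. For plain static files such as JS, the intended configuration would use Alias instead. A minimal sketch, reusing the example paths above (the Order/Allow syntax assumes Apache 2.2):

        Alias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Order allow,deny
            Allow from all
        </Directory>

    On Apache 2.4 the access lines would instead be a single "Require all granted".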

    Read the article

  • How to copy a bunch of pages? Is there a 3rd party tool?

    - by unknown (yahoo)
    (I asked the following question at the DNN forum, and also at Snowcovered. Nobody knew of such an obvious time-saver being for sale. I'm posting here in case anybody knows of a freeware module that might do this.)

    By "groups of DNN pages", I mean pages that form a hierarchy (not necessarily a hierarchy headed by a page at the same level as the Home page). I know that I can copy web pages one by one, as the admin login, via the web-based DNN interface. But I'd prefer a script or wizard of some sort (that runs scripts behind the scenes) that allows me to:

    1) specify a web page that I want to copy (along with the hierarchy of pages under it);
    2) specify the names and titles of the new top-level pages;
    3) specify whether the contained modules of the top-level page that I want to copy are to be: ( ) New ( ) Copy ( ) Reference (as in the web-based interface);
    4) repeat 3) for each of the source pages in the hierarchy that I want to copy.

    You might say that I am looking to do something similar to creating a portal website based on a template, except that it's not an entirely new website - it's a section of the current website. I might want to do this because I have an organization that is broken into chapters, and I want each chapter to have, say, its own General Information page (which acts as its home page), and underneath that, in its hierarchy, a Contact Info page and an Events page. So, going from this:

        Home Page
            General Information Page
                Contact Info
                Events

    to this:

        Home Page
            General Information Page
                Contact Info
                Events
            General Information Page Kiwanis - Bloomfield
                Contact Info
                Events
            General Information Page Kiwanis - Dayton
                Contact Info
                Events

    If I have 200 chapters, I certainly don't want to copy those 3 web pages using the web-based interface, as that would take a long time. (And imagine if each chapter's new sub-website had 30 pages!) I just want to specify the parameters of a copy process, press a button, and let the system do the rest.

    Read the article

  • Spotlight actually searching every file on "This Mac"

    - by Cawas
    I know of 2 ways to search for any file in your machine using Finder (some say it's Spotlight) and no Terminal. To prevent answers / comments about Terminal, I consider it either for scripting something or as last resource. It's not practical for lots of usages. For instance, if you want to find something to attach to a mail, or embed in iTunes or any other app, you can just drag n' drop one or many of them. Definitely not practical to do under Terminal. There are many cases of use for any, but the focus here is Graphical User Interface. Well, the 2 ways basically are: Press Cmd + Opt + Spacebar and type in your search. Press the + button, select "System files" and "are included". This is so far my preferred way, but I'm not sure it will go through every file. Open Finder, press Cmd + Shift + G and/or select just one folder. Type in your search and select the folder rather than "This Mac". This will bring files not shown in "This Mac" if you select a folder outside of the default scope. Thing is, none of those is really convenient or have the nice presentation from regular Spotlight, which you get from Cmd + Spacebar and just typing. And, as far as I've heard, the default behavior on Spotlight in Tiger was actually being able to find files anywhere. So, is there any way to make the process significantly simpler? Maybe some tweak, configuration or really good Spotlight alternative? I'd rather keep it simple and tweak Spotlight.

    Read the article

  • Debugging a Drobo that chokes Windows 7x64 When Plugged In

    - by Pridkett
    I've had a love hate relationship with my Drobo for a long time. After two years of using it on a Linux box, I moved it over to a Windows 7 machine where it seemed to work just fine for a long time, but it was under very light usage. Mainly backups that never actually happened. Recently I began using it for additional backup services (through CrashPlan, which is great). This means the Drobo gets a lot more usage. Also it means that something interesting happens, the Drobo can choke my system on startup. Here's what I mean: Start computer without Drobo plugged in, CrashPlan and Drobo Dashboard services disabled: 105s Start computer with Drobo plugged in Crashplan disabled, Drobo Dashboard enabled: 250s (and 1 cpu at 100% for a very long time, drobo churning) Start computer with Drobo plugged in, CrashPlan and Drobo Dashboard disabled: 250s (1 cpu at 100% for a very long time, drobo churning) Start computer with Drobo plugged in, Crashplan and Drobo Dashboard enabled: 300s (1 cpu at 100% for a very long time, drobo churning) If I yank the USB plug on the Drobo the CPU usage goes down to nothing very quickly. The slow startup in the fourth scenario is because CrashPlan is trying desperately to load stuff up on the H: drive before it gives up, so I've disabled it for the time being. So here's my question: What the heck is going on when I plug the drobo in? I've fired up Process Explorer and see that the System process is hogging the CPU, specifically it's an ntoskrnl.exe/KdPollBreakIn thread that's going ape. Is this something that's wrong with Drobo? Windows? Any idea on how to find out? If it matters, here's tech info: Athlon 64x2 4400, 2GB RAM, Win7 Ultimate, Drobo USB (2x1TB, 2x320GB)

    Read the article
