Search Results


  • HP Photosmart C4780 printer/scanner breaks netgear WNDR3700v3 router on connection

    - by CodeJunkie
    A few months ago, we upgraded our Netgear router's firmware. Immediately, we
    started having trouble connecting to the internet. Each time the problem
    occurred, every device in the house would stop being able to make new
    connections to the internet. For example, you couldn't open new pages in the
    browser, but if Skype was running when the problem happened, you could keep
    talking to people. The only solution seemed to be resetting the router to
    factory defaults. Eventually, we solved the problem by downgrading the router
    firmware.

    A while later, we got a new Netgear router. Almost right away, the new router
    started having exactly the same problems the old one had on current firmware.
    The network connection would stay active and the computer would say it had an
    internet connection, but you couldn't do anything online except use Skype. We
    eventually figured out that this happens every time our HP printer gets onto
    the wireless network: the whole network stops connecting to the internet
    almost completely. The only fix at that point is to reset the router to
    factory specs and unplug the printer so it can't get back onto the network.

    The Netgear router has the latest firmware, but the printer/scanner is very
    old. It looks to me like this is probably a firmware conflict between the
    printer and the router, but I'm not sure how to fix it. Here's some
    additional information:

        Printer: HP Photosmart C4780
        Router: WNDR3700v3
        Router firmware: V1.0.0.22_1.0.17 (stock, up-to-date firmware)

    Why would the printer getting on the network prevent the router from
    accessing the internet correctly until it is reset? What can be done to allow
    the printer to be on the network without breaking the network for all other
    devices?

    Edit: One other thing that happens during this internet problem is that
    multiple computers in the house display an "IP conflict" message repeatedly
    and extremely frequently (as often as every five to ten minutes, and every
    time a connection to the wireless is made).
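    A hedged aside: the repeated "IP conflict" messages can be confirmed from any
    Linux machine on the LAN with arping's duplicate-address-detection mode. The
    interface name and address below are placeholders for whatever the conflict
    dialogs report:

        # prints the MAC of any other host already claiming the address,
        # which would identify the device (printer?) causing the conflict
        arping -D -I wlan0 192.168.1.10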

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use Apache as the backend server and nginx on the frontend. Apache listens
    on port 8080 and nginx on port 80. I have the root point to the public folder
    for each virtual host:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias site.com *.site.com
            DocumentRoot /var/www/site.com/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    And here's the nginx config:

        server {
            listen 80;
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            root /var/www/site.com/public;
            index index.php index.html;
            server_name site.com *.site.com;
            location / {
                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $host;
                    proxy_pass http://127.0.0.1:8080;
                    proxy_cache one;
                    proxy_cache_use_stale error timeout invalid_header updating;
                    proxy_cache_key $scheme$host$request_uri;
                    proxy_cache_valid 200 301 302 20m;
                    proxy_cache_valid 404 1m;
                    proxy_cache_valid any 15m;
                }
            }
            location ~ /\.(ht|git) {
                deny all;
            }
        }

    The problem is that Apache resolves the domain just fine (site.com:8080), but
    nginx instead shows a 502 Bad Gateway (site.com:80). I tried looking at the
    error_log and access_log but I can't find any hint as to why nginx can't
    work.

    EDIT: The problem was that I wasn't including that isolated config in nginx.
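    A sanity check worth noting (a sketch, assuming this exact setup): a 502 from
    nginx usually means the proxy_pass target itself isn't answering, which can
    be tested on the server with nginx out of the path:

        # ask Apache directly on the backend port, with the Host header
        # nginx would have forwarded
        curl -v -H "Host: site.com" http://127.0.0.1:8080/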

  • User http does not have write permissions on directory?

    - by dwieeb
    I have a bit of an odd setup, I think. I have groups for each domain my
    server hosts, and I add the user http to each domain group along with the
    users that should have access to that group's domains. In my PHP script,
    running from a directory 'public_html', I try creating a file:

        <?php
        $output = "";
        print exec('touch test 2>&1', $output);

    But I get:

        touch: cannot touch `test': Permission denied

    and the file is not created. But here, clearly stated, the group has all
    permissions on the directory:

        drwxrwxr-x 5 dwieeb example.com 1024 Feb 4 05:19 public_html

    And here are the permissions on the PHP file in public_html that is trying
    to use the exec function:

        -rw-rw-r-- 1 dwieeb example.com 59 Feb 4 05:19 test.php

    How is this possible if http is part of the example.com group (as seen from
    a cat on /etc/group) and the directory has full permissions for the group?

        ...
        example.com:x:1000:dwieeb,http

    I'm stumped.

    EDIT (since apparently I'm not cool enough to answer my own questions yet):
    Ah, I found the problem. Yes, I restarted Nginx, but the php-fpm daemon must
    be restarted as well when http is added to the group for my domain. On Arch
    Linux:

        rc.d restart php-fpm
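    The rule behind that fix is worth a sketch: supplementary groups are read at
    process start, so a running daemon keeps its old group list until restarted.
    A hedged way to see what a live worker actually has (the PID below is a
    placeholder for whatever ps reports):

        # list php-fpm workers, then inspect one worker's group membership
        ps -C php-fpm -o pid,user,group
        grep Groups /proc/12345/status    # 12345 is a placeholder PID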

  • PowerPoint 2007 slides are only partially converted to PDF since SP3

    - by Tim Pietzcker
    EDIT: Microsoft support has confirmed that it's a bug in PowerPoint 2007 SP3.

    I have recently encountered a problem with the "Save as PDF/XPS" add-in for
    PowerPoint 2007. When I use "Save as PDF/XPS" to create a PDF version of my
    presentation, some slides are only partially included in the resulting PDF
    file. For example, a full slide (download the PPTX file here) is reduced to
    a fragment of itself in Adobe Reader X or Acrobat Pro X (both 10.1.1)
    (download the PDF file here).

    So far, I have only encountered this with slides that contain animation
    elements, but which parts of the elements remain in the PDF version appears
    to have nothing to do with the order in which the animated elements appear,
    so that might just be a coincidence.

    Update: The problem persists even if I "un-animate" the slides (removing the
    animation but leaving the previously animated elements intact). When viewing
    the affected slides, Acrobat Reader sometimes complains about the file
    containing invalid elements, and says I should complain to whoever generated
    the PDF file...

    Update 2: I have just installed Office 2007 on a new Windows 7 x64 PC. With
    the original Office version (12.0.4518.1014 MSO 12.0.6562.5003), a correct
    PDF file is generated. After installation of SP3 (12.0.6606.1000 SP3 MSO
    12.0.6607.1000), a corrupt PDF file is generated. Today's Microsoft updates
    (to PowerPoint version 12.0.6654.5000) haven't changed anything, by the way.

    Update 3: I have opened a tech support incident with Microsoft. They have
    confirmed the "limitation", as they called it, and it is indeed limited to
    2007 SP3 only. They are going to pass it on to the developers, but they
    can't say when or even if a fix will be forthcoming, so I guess I'll upgrade
    to 2010...

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable
    of keeping a directory synchronized across 4 PCs. My requirements are:

        - offline access (data must be available offline on each PC)
        - preserve execution rights: some files are marked executable on a Linux
          partition. This flag should be replicated.
        - efficient sync strategy: some of my files are 20GB and are changed
          quite often, but in very small parts (VirtualBox images). Delta
          transmissions are welcome.
        - efficient handling of space: no history for files; files shouldn't be
          copied to temp directories "just in case you break it"
        - it must propagate deletions of files
        - modification can happen on any of the 4 PCs and should be propagated
          when the other PCs are connected

    Other specs of my situation: sync is over a LAN; the total amount of data to
    be synced is around 180GB, in some ten thousand files. Changes are small,
    but can happen in big files. At the moment I'm interested in a Linux-only
    solution. Conflicts either don't happen or are solved with "last one wins".

    I haven't found any good solution. I've been trying:

        - unison: the only one working at the moment, but during the hashing
          phase it hangs my PC for some minutes, disk light steady on
        - SparkleShare: doesn't handle large files nicely, and it keeps a
          history of all your changes that grows indefinitely. They promise it
          will be fixed in future releases, but at the moment it still doesn't
          fit my needs.
        - ownCloud: keeps a history of each file I change
        - Coda? (help! I couldn't set it up correctly!)
        - git-annex assistant: transforms all your files into symlinks and marks
          the original files read-only ("just in case you make a mistake while
          you modify it"!). Before you edit a file you have to issue a special
          command, "git-annex unlock", which creates a local copy of the file,
          and you have to remember to lock it again if you want it synchronized.

    What should I try next?
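    As an aside on the delta-transmission requirement, rsync illustrates the
    behavior being asked for (a sketch only; rsync by itself is one-way and
    doesn't satisfy the 4-PC propagation requirement):

        # only changed blocks of the 20GB images cross the LAN; -H preserves
        # hard links, --inplace avoids the temp-copy behavior ruled out above
        rsync -aH --inplace --partial /data/images/ otherpc:/data/images/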

  • Require a very simple bash-based webserver for logging XML POST [on hold]

    - by Syffys
    As the title says, it's for testing purposes and I need it to be extremely
    light (1 line to 1 single light file). Here is a sample XML query:

        XML_QUERY=$(cat <<EOF
        <?xml version='1.0' encoding='UTF-8'?>
        <Test></Test>
        EOF
        )
        curl -H "Content-type: text/xml; charset=utf-8" \
             -H "Soapaction: \"\"" \
             -k -d "${XML_QUERY}" http://localhost:8088

    Here are some of the tracks I have found so far, even if I wasn't able to
    adapt them to work as I expect:

        - Netcat minimal webserver: the problem is that my nc does not have the
          -q option, so the connection closes before delivering the XML content
        - Netcat-only webserver: same as above

    Thanks in advance!

    EDIT: As has been asked, I'm running Red Hat Linux, even if the distro does
    not really matter and the OS is implied since I'm asking for a bash-based
    solution... Also, about my topic being on hold: "Instead, describe your
    situation and the specific problem you're trying to solve". I thought this
    was exactly what I was doing, but OK, I'll reword:

        My situation: bash environment (which can also include some standard
        Linux tools: netcat, python or whatever)

        My specific problem: please see the title: require a very simple
        bash-based webserver for logging XML in HTTP POST, for testing purposes
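    For what it's worth, a minimal sketch of the kind of one-liner being asked
    for, assuming a netcat without -q (option spelling varies between builds;
    some need "nc -l -p 8088" instead of "nc -l 8088"):

        # answer every request with an empty 200 so curl returns promptly,
        # appending each raw request (headers + XML body) to post.log
        while true; do
            printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n' \
                | nc -l 8088 >> post.log
        done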

  • What should be monitored to troubleshoot file sharing problems?

    - by RyanW
    I'm running into some problems with a file share used by an ASP.NET web
    application. In this configuration, there are 2 web servers (Win2k8 Web)
    that connect to a file server (Win2k8 Enterprise), reading and writing files
    over a file share. Recently, one of the web servers has begun encountering
    an error accessing the file share:

        IOException: The specified network name is no longer available.

    There does not appear to be much info on the web explaining what causes this
    and how best to fix it, so I'm looking at what I can monitor in order to get
    clues. I'm not sure if it's hardware, just a load issue, file size,
    frequency, etc. With Windows perfmon, what can I monitor on the file server
    side? There's the "Files Open" counter; any other good ones? What can I
    monitor on the web server side?

    EDIT: I'll add that the UNC path uses the IP address of the file server, not
    a name to resolve. Also, the share is a single flat directory with over
    100K files.

  • Content not being compressed even though I'm using zlib in php.ini

    - by Tola Odejayi
    I've edited my php.ini file so that it has these two entries:

        zlib.output_compression = On
        zlib.output_compression_level = 4

    However, after restarting Apache, when I request PHP pages, the headers
    returned in the response indicate that my server is still NOT serving
    compressed pages (here are selected headers as viewed using Chrome's Network
    feature):

        Cache-Control:no-cache, must-revalidate, max-age=0
        Connection:Keep-Alive
        Content-Type:text/html; charset=UTF-8
        Date:Mon, 17 Sep 2012 23:46:13 GMT
        Expires:Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified:Mon, 17 Sep 2012 23:46:13 GMT
        Pragma:no-cache
        Proxy-Connection:Keep-Alive
        Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding:chunked
        Via:1.1 XXX-PRXY-07
        X-Powered-By:PHP/5.2.17

    What might I be doing wrong? Is there any other setting that I need to
    change?

    EDIT: Here is another set of headers returned to another computer:

        Cache-Control:no-cache, must-revalidate, max-age=0
        Connection:close
        Content-Type:text/html; charset=UTF-8
        Date:Thu, 20 Sep 2012 09:45:26 GMT
        Expires:Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified:Thu, 20 Sep 2012 09:45:26 GMT
        Pragma:no-cache
        Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding:chunked
        Vary:Cookie
        X-Powered-By:PHP/5.2.17
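    One detail worth isolating (a sketch, with a placeholder URL): zlib only
    compresses when the request advertises gzip support, and the Via header
    above suggests a proxy in the path that may be stripping Accept-Encoding.
    Testing from the server itself takes the proxy out of the picture:

        # a Content-Encoding: gzip line in the output means zlib is working
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" \
            http://localhost/page.php | grep -i content-encoding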

  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up Linux software RAID5 on three hard drives and want to encrypt
    it with cryptsetup/LUKS. My tests showed that the encryption leads to a
    massive performance decrease that I cannot explain.

    The RAID5 is able to write 187 MB/s [1] without encryption. With encryption
    on top of it, write speed is down to about 40 MB/s. The RAID has a chunk
    size of 512K and a write-intent bitmap. I used -c aes-xts-plain -s 512
    --align-payload=2048 as the parameters for cryptsetup luksFormat, so the
    payload should be aligned to 2048 blocks of 512 bytes (i.e., 1MB).
    cryptsetup luksDump shows a payload offset of 4096, so I think the alignment
    is correct and fits the RAID chunk size.

    The CPU is not the bottleneck, as it has hardware support for AES
    (aesni_intel). If I write to another drive (an SSD with LVM) that is also
    encrypted, I do get a write speed of 150 MB/s. top shows that the CPU usage
    is indeed very low; only the RAID5 xor takes 14%.

    I also tried putting a filesystem (ext4) directly on the unencrypted RAID to
    see if the layering is the problem. The filesystem decreases the performance
    a little bit as expected, but by far not that much (write speed varying, but
    around 100 MB/s).

    Summary:

        Disks + RAID5: good
        Disks + RAID5 + ext4: good
        Disks + RAID5 + encryption: bad
        SSD + encryption + LVM + ext4: good

    The read performance is not affected by the encryption: it is 207 MB/s
    without and 205 MB/s with encryption (also showing that CPU power is not the
    problem).

    What can I do to improve the write performance of the encrypted RAID?

    [1] All speed measurements were done with several runs of dd if=/dev/zero
    of=DEV bs=100M count=100 (i.e., writing 10G in blocks of 100M).

    Edit: If this helps: I'm using Ubuntu 11.04 64-bit with Linux 2.6.38.

    Edit 2: The performance stays approximately the same if I pass a block size
    of 4KB, 1MB or 10MB to dd.
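    A hedged aside: newer cryptsetup releases (1.6 and later, so newer than the
    Ubuntu 11.04 stock version) ship a built-in microbenchmark that separates
    raw cipher throughput from disk and RAID effects:

        # measures in-memory AES-XTS speed only; a result far above 40 MB/s
        # would put the bottleneck in the layering, not the cipher
        cryptsetup benchmark -c aes-xts-plain64 -s 512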

  • How to prevent asymmetric routing with multiple eBGP routers?

    - by Andy Shinn
    I have 2 routers announcing a /22 subnet to different providers (one
    provider connects to each of the 2 routers). I have split the /22 into two
    /23s and announce one /23 on each router, plus the /22 (the providers will
    take the more specific route). This allows me to fail over and keep traffic
    inside each /23 going in and out the same provider.

    What are other ways in which I could announce just the /22 with both routers
    and have packets from servers on the network behind the routers go back out
    the same router they came in from?

    EDIT: The main problem I come across, which end users and clients complain
    about the most, is that the least-hop route is sometimes not the "optimal"
    route. In my case, I know that provider B may have better latency to nation
    X. But when packets come in from provider B, they may go out provider A or
    provider B. The reverse is also true: if I send a packet to nation X out
    provider A, even though it may have more hops back, the packet will likely
    come in from provider B (which may have higher latency, packet loss, etc. to
    this nation).
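    If the routers are Linux boxes, the usual building block for "reply out the
    path you arrived on" is source-based policy routing; a sketch with
    illustrative addresses (documentation-range next hops, private prefixes
    standing in for the two /23s):

        # one routing table per provider, keyed on which /23 sourced the packet
        ip route add default via 198.51.100.1 table 101   # provider A next hop
        ip route add default via 203.0.113.1  table 102   # provider B next hop
        ip rule add from 10.1.0.0/23 table 101
        ip rule add from 10.1.2.0/23 table 102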

  • Fedora 15: em1 recently disappeared and hostapd no longer serves internet to wirelessly connected devices

    - by Daniel K
    I have a laptop running hostapd, phpd, and mysql. This laptop uses an
    Ethernet connection to connect to the internet and acts as a wireless access
    point for my workplace's wifi devices. After installing some software and
    reconnecting my Ethernet elsewhere, my "em1" device is no longer present and
    wirelessly connected devices can no longer reach the internet.

    The software I recently installed is: pptp, pptpd, and updates to some
    Fedora libraries. I have also recently moved my desk and laptop to another
    location and thus had to reconnect the Ethernet elsewhere.

    Wifi devices no longer have access to the internet. Wirelessly connected
    devices are able to successfully log into the laptop, showing full strength,
    the correct SSID, and using the proper password. However, when I try to
    connect to a site like Google, the request times out. The device "em1" also
    no longer appears on my machine. Running # ifup em1 gives me the following
    output:

        ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device em1 does not seem to be present, delaying initialization.

    And running # dhclient em1 has the following output:

        Cannot find device "em1"

    When I run # dmesg|grep renamed, I get the following:

        renamed network interface eth0 to p4p1

    I've tried to connect to the internet through p4p1 directly from the laptop
    and was successful. However, the wireless devices connected to my laptop are
    not able to connect to the internet. I have uninstalled pptp and pptpd using
    # yum erase ... but the problem still persists.

    To install pptp I used:

        # yum install pptp

    To install pptpd I did the following:

        # rpm -Uvh http://poptop.sourceforge.net/yum/stable/fc15/pptp-release-current.noarch.rpm
        # yum install pptpd

    To update my Fedora libraries I used:

        # yum check-update
        # yum update

    EDIT: Running # route produces the following results:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.11.200.1     0.0.0.0         UG    0      0        0 p4p1
        10.11.200.0     *               255.255.252.0   U     0      0        0 p4p1
        172.16.100.0    *               255.255.255.0   U     0      0        0 wlan0
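    A hedged reading of those symptoms: the wired NIC is now called p4p1, so
    anything that forwarded the wifi clients' traffic out "em1" (NAT rules,
    ifcfg files) now points at a device that no longer exists. A sketch of the
    check and of the typical NAT rule, with names assumed rather than taken from
    this machine:

        # find leftover references to the old interface name
        grep -rl em1 /etc/sysconfig/network-scripts/

        # re-point the masquerading that the hostapd clients rely on
        iptables -t nat -A POSTROUTING -o p4p1 -j MASQUERADE
        sysctl -w net.ipv4.ip_forward=1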

  • Photo managing software that supports network drives?

    - by musicfreak
    My dad is a photographer in his free time, and he's been using Lightroom to
    manage his photos. However, we recently put all of our photos on a NAS drive
    to allow us to access them from any computer at any time. The problem with
    this is that Lightroom cannot load catalogs from network drives.

    We need support for network drives because we'd like to be able to browse
    the photos from any computer, and for any computer to be able to add photos
    to the collection. Right now we're just syncing the Lightroom catalog file
    between us, but the extra step is a pain, and doing it manually makes it
    error-prone.

    Is there any software (free or commercial) that has proper support for
    network drives? The only real feature I need is to be able to sort photos by
    date and by some sort of tags. I don't need any editing features like those
    found in Lightroom; my dad is comfortable using Photoshop to edit photos.
    Also, if there is another solution to this that I haven't thought of, feel
    free to share.

  • Access node.js local server through mobile via same shared wifi

    - by laggingreflex
    EDIT: I was stuck in this situation before, but that time it was
    Apache-related. This time I'm using NodeJS, so the old answer doesn't help.

    I'm running a NodeJS webserver, not Apache this time, on port 80 on Windows
    7. I want to access the webserver through my mobile, which shares the wifi
    router with my PC locally. http://localhost works from the PC, but I can't
    access http://192.168.1.4 from either my phone or even my computer.

    ipconfig /all on my computer lists my IP address as 192.168.1.4:

        Wireless LAN adapter Wireless Network Connection:
        IPv4 Address. . . . . . . . . . . : 192.168.1.4(Preferred)

    I can ping my phone's (internal) IP address [192.168.1.5] from the PC and,
    vice versa, I can ping my PC [192.168.1.4] from my phone. So why can't I
    access http://192.168.1.4 from my phone (or PC)? The firewall is off.
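    One cause consistent with "localhost works, LAN address doesn't" (a sketch;
    the one-liner below is a hypothetical test server, not the asker's code): a
    Node server bound explicitly to 127.0.0.1 only answers loopback, while
    binding to 0.0.0.0 listens on every interface, including 192.168.1.4:

        # minimal test server bound to all interfaces; then try the LAN address
        node -e "require('http').createServer(function(q,s){s.end('reachable')}).listen(80,'0.0.0.0')"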

  • Power issues Foxconn Barebones kit

    - by alpha1
    I have a Foxconn R20D2 bought about a year and a half ago. It ran fine for a
    while, and then around last summer it started having power issues. I chalked
    it up to changes in electric current due to the overwhelmed grid when people
    turn on their AC units, but the problem has stayed all year: shutting off
    randomly, shutting off when I turn on a vacuum, and similar problems. Now
    that it's summer again, the box basically sits there all day cycling itself,
    and it has now gotten to the point where it tries to boot and, after 3
    seconds, fails, shuts off and tries again.

    I know it's power related. It runs openSUSE Linux and there are never any
    shutdown logs or anything of that sort. As the weather got hotter I noticed
    it happening more and more, and it most often happened in the morning, I
    presume as people woke up and turned on the AC.

    The power supply is a Channel Well Technology Co. Ltd. model DSL-150, 150W
    max output. It's an Intel Atom dual core, with 2 SATA drives, no CD/floppy
    etc., recently upgraded from 2 to 4GB of RAM. It runs at 104 degrees
    Fahrenheit almost all the time.

    Is there any way I can test the power supply or anything else to try to fix
    it? I'm a software guy, not hardware, so I'm at a complete loss here. Thanks
    for all assistance you can provide!

    EDIT: The switch on the back that says 230 or 115 is set to 230. If I'm in
    the USA, could that be causing the problems?

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer Ubuntu. The problem
    is, we also have developer tools that only work with 'officially' supported
    Linux distributions that use much older 2.6.18-based kernels. (And even if
    they worked with newer ones, the vendors could always say they won't
    "support" the software unless it's on one of their 'officially' supported
    platforms.) We could of course just tell the users to use CentOS or
    something else 2.6.18-based, and I'm sure their response would be something
    like: "you can take Ubuntu from our cold, dead hands." :)

    Which brings me to some questions: is there any good/easy/recommended way to
    run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with
    which system: Xen, KVM, VMware, ...?), and then roll that into our own
    custom internal distribution that could be easily installed? KVM looks like
    a good high-performance option, just recently included in RHEL 5.4, but if
    hardware support for virtualization like Intel VT or AMD-V is necessary,
    then I'd guess only those folks with fairly new PCs will be able to do it.

    I would be very interested to hear how anyone else has addressed this kind
    of issue.

    EDIT: The target audience / users of this kind of system would be
    developers, and each one needs to run locally licensed commercial software,
    so building out some separate beefy central machines isn't an option,
    unfortunately, due to license restrictions. Even if that weren't the case, a
    couple of developers could quickly eat up the resources with parallel
    builds. :) Ideally, I was hoping there was some step-by-step guide out there
    for building your own pre-built distribution that had e.g. CentOS 5.x as a
    guest on an Ubuntu Desktop host.
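    On the hardware-support question, the standard check is worth a sketch: KVM
    requires the VT-x/AMD-V CPU flags, and they are visible from userspace, so
    each developer PC can be vetted up front.

        # non-zero output means the CPU advertises hardware virtualization
        # (vmx = Intel VT-x, svm = AMD-V); KVM needs it, whereas Xen can still
        # run paravirtualized guests without it
        egrep -c '(vmx|svm)' /proc/cpuinfo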

  • CentOS 6.5 x64 + Ajenti + nginx + php-fpm + mysql setup unable to load the PHP page

    - by Francis
    I'm using Ajenti to set up a PHP web server. I can load the PHP info page,
    but when I try to load a PHP site copied from another server, I get a blank
    page, with this entry appearing in the access log. I have no clue what is
    wrong or how to troubleshoot further; please advise.

        x.x.x.x - - [28/May/2014:10:08:37 -0400] "GET /index.php HTTP/1.1" 200 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:29.0) Gecko/20100101 Firefox/29.0"

    Kernel and release:

        2.6.32-431.1.2.0.1.el6.x86_64

        cat /etc/redhat-release
        CentOS release 6.5 (Final)

    The vhost config:

        # AUTOMATICALLY GENERATED - DO NO EDIT!
        server {
            listen *:80;
            server_name test.com www.test.com;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            root /www;
            index index.html index.htm index.php;
            location ~ \.php$ {
                alias /www;
                fastcgi_index index.php;
                include fcgi.conf;
                fastcgi_pass unix:/var/run/php-fcgi-php-fcgi-0.sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }
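    A 200 response of only 31 bytes with a blank page usually means PHP ran but
    died quietly; a hedged first step (log paths assumed from a stock CentOS 6
    php-fpm install) is to make the error visible:

        # watch for the fatal error while reloading the blank page
        tail -f /var/log/php-fpm/error.log /var/log/nginx/error.log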

  • Comparing 2 (or 3 Files If Possible) "Line By Line"

    - by PythEch
    I want to find the differences between 2 (or 3, if possible) files, line by
    line. Diff utils can do this, but give inaccurate results. The 2 files have
    exactly the same number of lines, "134", yet diff gives me "Added Lines" and
    "Removed Lines". This is wrong: they have exactly the same number of lines,
    and there are no added or removed lines. The text files I want to compare
    contain only numbers; maybe that's why the algorithm fails. I couldn't find
    any option to prevent that, though I may be wrong; I mean, there should be
    an option for that, but again, I couldn't find one.

    This is what I get (5am.txt vs 6am.txt, there is a huge problem):

    This is what I want (6am.txt vs 7am.txt, still has problems):

    But the first image still has this problem at the last lines.

    Edit: After I figured out that there is no utility to do this, I handled it
    myself. I did almost the same thing as RedGrittyBrick. This script imitates
    the diff utility, so I (or you) can use it with diff2html. To use it with
    diff2html, just change the line

        diff_stdout = os.popen("diff %s" % string.join(argv[1:]), "r")

    to

        diff_stdout = os.popen("script.py %s" % string.join(argv[1:]), "r")

    and name this script whatever you want:

        import sys

        f1 = open(sys.argv[1], "r")
        f1_read = f1.readlines()
        f1.close()
        f2 = open(sys.argv[2], "r")
        f2_read = f2.readlines()
        f2.close()

        changed = {}
        first_c = ""
        for n in range(len(f1_read)):
            if f1_read[n] != f2_read[n]:
                if first_c == "":
                    first_c = n + 1
                changed[first_c] = n + 1
            else:
                first_c = ""

        # Let's imitate diff-utils...
        for (x, y) in changed.items():
            print "%d,%dc%d,%d" % (x, y, x, y)
            for i in range(x, y + 1):
                sys.stdout.write("< %s" % f1_read[i - 1])
            print "---"
            for i in range(x, y + 1):
                sys.stdout.write("> %s" % f2_read[i - 1])

    Final results:
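    For files where every line holds a single number, the same line-by-line
    report can also be had from standard tools; a sketch (filenames taken from
    the comparison above):

        # paste joins the two files column-wise; awk prints each line number
        # where the columns disagree
        paste 5am.txt 6am.txt | awk '$1 != $2 { print NR ": " $1 " -> " $2 }'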

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index the files stored on network-attached storage
    in Windows 7, so that the files can be available in Windows Search and
    Libraries? I am referring to cheap and widely available NAS units like the
    Western Digital My Book series, which use an embedded Linux server.

    Similar question:
    http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html

    EDIT: Windows help proposes making the files stored on the NAS available
    offline. This is obviously not a good solution if the NAS has more data than
    the client can store:

        "If the folder is on a network device that is not part of your
        homegroup, it can be included as long as the content of the folder is
        indexed. If the folder is already indexed on the device where it is
        stored, you should be able to include it directly in the library. If the
        network folder is not indexed, an easy way to index it is to make the
        folder available offline. This will create offline versions of the files
        in the folder, and add these files to the index on your computer. Once
        you make a folder available offline, you can include it in a library.
        When you make a network folder available offline, copies of all the
        files in that folder will be stored on your computer's hard disk. Take
        this into consideration if the network folder contains a large number of
        files."

  • Outlook Web Access and Rules

    - by Chris_K
    One of my clients would prefer that I have an email address in their domain.
    They run SBS 2k8, so I just monitor my email from them (and their clients)
    via Outlook Web Access. No POP or IMAP access, only OWA. No VPN access
    either, so no "real" Outlook. Just OWA.

    I figured I'd build an Outlook rule to forward mail from that account to an
    account that I monitor; that way I won't need to keep IE open all the time
    to monitor email. However, I just can't seem to get the dang rule to work
    and am hoping someone here can give me a nudge or pointer.

    From OWA, I click on Options - Rules and edit my current rule, which kinda
    works. The rule is supposed to forward the email sent to me and then move it
    to a folder. It does move it to a folder... just never seems to forward it.
    The rule looks like this:

        Apply this rule after the message arrives
        where my name is in the To box
        redirect it to [email protected]
        and move it to the Forwarded to MyEmail folder
        except with "ALERT" in the subject

    As I mentioned, mail does get moved, just never redirected. I've tried
    "Forward" and "redirect" actions with the same results. Any suggestions?

  • Shared configuration for Eclipse on Debian server

    - by Joris Meys
    I've manually installed the latest Eclipse on our Debian server and wanted
    to configure it so all users share the same configuration. It turned out to
    be less obvious than I thought: I don't seem to be able to install packages
    for all users. If I run Eclipse myself, all configuration data is saved
    under my own home directory. If I run Eclipse using sudo, everything is
    saved under the root directory but is not accessible to other users when
    they run Eclipse.

    I've been browsing the Eclipse manual and some forums, but apart from a
    "yes, you can" I couldn't find any information on how this should be done.
    The biggest problem is installing plugins so that all users can find them.
    Any help is greatly appreciated.

    Eclipse: 3.6.1 Classic, installed using this procedure.
    Server uname: GNU/Linux * 2.6.26-2-amd64
    The server is accessed using PuTTY, and a GNOME desktop through RealVNC;
    just mentioning it in case that is of any importance.

    Our sysadmin is on "prolonged leave" (working in Spain and never replaced),
    so I'm stuck without help here.

    EDIT: I also asked this question on StackOverflow, as I wasn't certain this
    is a genuine server-related question. Please feel free to merge both
    questions at the appropriate place.
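    For what it's worth, a hedged sketch of the usual shared-install layout:
    keep the product directory root-owned (plugins installed there as root are
    visible to everyone), and let the launcher flags separate shared pieces from
    per-user state. Paths are illustrative:

        # shared, root-owned install; ordinary users get read-only access
        sudo chown -R root:root /opt/eclipse

        # each user keeps a private configuration area and workspace
        /opt/eclipse/eclipse -configuration "$HOME/.eclipse/configuration" \
                             -data "$HOME/workspace"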

  • Toshiba Satellite P755D USB 3.0 Drivers Missing - Windows 7 Professional

    - by nicorellius
    I bought a Toshiba Satellite P755D recently and installed Windows 7
    Professional on the machine. It runs great. But I noticed the exclamation
    point in the yellow triangle icon in Device Manager next to the Universal
    Serial Bus (USB) controller (I'm assuming this is the USB 3.0 controller,
    because mine doesn't recognize devices).

    Normally when this kind of thing happens I go to the manufacturer's website,
    download the appropriate drivers and call it a day. But not this time... I
    browsed to my model and found no driver for the USB 3.0 controller. I tried
    other hardware and utility drivers, thinking they would be bundled. No luck.
    I tried looking up the motherboard in my machine: generic name, no luck.

    I then called Toshiba technical support and they tried basic
    troubleshooting, e.g., uninstalling the device and rebooting for
    auto-installation; no luck. I popped the Windows 7 disk back in and tried to
    get information that way; no luck. Finally, the technical support guy said
    he would look in the engineers' system to see if there was a specific driver
    available, and that's where I'm at.

    The technician told me that these USB 3.0 drivers come within the native
    driver pack in Windows, but that doesn't seem to be the case. Any ideas?

    EDIT - See attached screen shots.

  • Proper web server setup

    - by DMin
    I just got myself a Slicehost basic slice to play around with so I can learn
    how to set up web servers. I have Ubuntu 10.04.2 installed on the server. I
    was able to successfully get the server up and running from scratch,
    following this tutorial. I know it is probably just a starter tutorial, so I
    was wondering if you guys can tell me what you like to do while setting up
    production servers. These are the steps that were followed:

        1. Update and upgrade Ubuntu
        2. sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server
        3. Back up a copy of apache2.conf, then edit it:
           set 'ServerTokens Full' to 'ServerTokens Prod'
           set 'ServerSignature On' to 'ServerSignature Off'
        4. Back up php.ini, then change "expose_php = On" to "expose_php = Off"
        5. Restart Apache
        6. Install the Shorewall firewall
        7. Configure Shorewall to accept only HTTP and SSH connections (in the
           rules file; see the sketch after this list)
        8. Enable Shorewall on startup
        9. Add the website to the server:
           sudo usermod -g www-data root
           sudo chown -R www-data:www-data /var/www
           sudo chmod -R 775 /var/www

    I want to make this CommunityWiki but can't seem to find the option to do
    it. Please feel free to add any feedback on the processes and things I am
    doing right/wrong. Much appreciated, thanks! :)
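    As an aside on step 7, the Shorewall rules-file entries for "only HTTP and
    SSH" would look something like this sketch (zone names assume the standard
    one-interface example configuration):

        #ACTION  SOURCE  DEST  PROTO  DEST PORT(S)
        ACCEPT   net     $FW   tcp    22    # SSH
        ACCEPT   net     $FW   tcp    80    # HTTP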

  • Making "default saved" work with GRUB2...?

    - by baltusaj
    I just installed the Moblin operating system. It's using GRUB2. On my Ubuntu
    8.04, GRUB 0.97 was being used, and I was using the "default saved" option
    comfortably. I found that with GRUB2 I should not edit /boot/grub/menu.lst
    directly, but I did :) because my Moblin does not contain any
    /etc/default/grub, where they say I should make the modification I want. So
    what I did is the following, which did not work:

        default=saved
        timeout=1
        #splashimage=(hd0,0)/boot/grub/splash.xpm.gz
        #hiddenmenu
        #silent

        title Moblin (2.6.31.5-10.1.moblin2-netbook)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current
            savedefault=1

        title Pathetic Windows
            rootnoverify (hd0,1)
            chainloader +1
            savedefault=0

    By doing this I should automatically switch between Moblin and Windows at
    each boot, but it's not working. Almost all the troubleshooting guides on
    the internet say I should enable the DEFAULT=saved option in
    /etc/default/grub, but I am unable to find this file. Any idea what else I
    should do? Thanks a lot.

    Update: I used the equals sign because by default my menu.lst had an entry
    default=0. However, "default 0" is also working fine. Moreover, the menu.lst
    I have is actually a symbolic link to ./grub.conf. I have also noticed that
    the grub-install and grub-set-default commands are not working.
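    A hedged observation: a menu.lst/grub.conf with "title"/"kernel" entries is
    GRUB legacy (0.9x) syntax, whatever the distribution calls it, and legacy
    GRUB spells this feature without equals signs; a sketch of the legacy form
    for the alternating-boot setup described above:

        default saved

        title Moblin (2.6.31.5-10.1.moblin2-netbook)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current
            savedefault 1    # save the Windows entry as the next default

        title Pathetic Windows
            rootnoverify (hd0,1)
            chainloader +1
            savedefault 0    # save the Moblin entry as the next default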

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail
    services, and BIND for domain name services. I recently moved servers. The
    old server ran a highly similar configuration, and all accounts were ported
    via WHM. However, the server is unable to send, and sometimes receive,
    email. The errors I am seeing (when I do get an error mail back) state:

        450 4.1.8 : Sender address rejected: Domain not found

    Edit for clarity: this is the error response from remote mail servers.
    Numerous independent mail servers come back with the same error. (The email
    address is merely one valid example.)

    My first instinct, of course, was to check the domain records. However,
    k-t.org appears to have a valid record (including an MX record), even after
    running it through domain checks on a completely different server elsewhere
    and online. Note that the issue appears to happen with all the domains
    hosted on the server, not just k-t.org. I have also ensured that a PTR was
    created.

    My Googling has only led me to people who had fairly basic DNS mistakes, but
    either I'm blind/dumb (possible, DNS is not my strong suit), or it's
    something a bit more archaic. I've run out of ideas, and I can't seem to
    find anything that could explain why servers are unable to resolve the
    domains. There doesn't seem to be anything missing or incorrect.
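    Since remote servers are the ones rejecting, the informative test is a
    sketch of what they see: query a public resolver from outside the box rather
    than the local BIND, which may answer fine even when the zone isn't
    delegated publicly.

        # the sending domain must resolve publicly; empty output here would
        # explain "Domain not found"
        dig +short MX k-t.org @8.8.8.8
        dig +short A  k-t.org @8.8.8.8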

  • Windows desktop virtualization instead of replacing workstations

    - by Chris Marisic
    I'm head of the IT department at the small business I work for; however, I
    am primarily a software architect, and all of my system administration
    experience and knowledge is ancillary to software development.

    At some point this year or next, we will be looking at upgrading our
    workstation environment to a uniform Windows 7 / Office 2010 environment, as
    opposed to the hodge-podge collection of various OEM-licensed editions of
    software on each different machine. It occurred to me that it is probably
    possible to forgo upgrading each workstation and instead have each act as a
    dumb terminal accessing a virtualization server, with the user's entire
    virtual workstation hosted on the server.

    Now, I know basically anything is possible, but is this a feasible solution
    for a small business (25-50 workstations)? Assuming that it is feasible,
    what rough guidelines exist for calculating the required server resources?
    How exactly do solutions handle a user accessing their VM: do they log on
    normally to their physical workstation and then use Remote Desktop to access
    the VM, or is this usually done with a client piece of software? What kinds
    of software are available for administering and monitoring these VMs, and
    can this functionality be achieved out of the box with Microsoft Server
    2008?

    I'm mostly interested in these questions as they relate to Server 2008 with
    Hyper-V, but feel free to offer insight on VMware's product line-up,
    especially if there are any compelling reasons to choose it over Hyper-V in
    a Microsoft shop.

    Edit: Just to add some more information: the goal would be to upgrade our
    platform from a Win2k3 / XP environment to a full Windows 2008 / Win7
    platform without having to perform all of the associated work on each
    differently configured workstation. Also, could anyone offer any realistic
    guidelines for how big the hardware needs to be to support 25-50
    workstations virtually? The majority of the workstations do nothing except
    Office, Outlook and the web. The only high-demand workstations are the
    development workstations, which would keep everything local.
