Search Results

Search found 17041 results on 682 pages for 'black box'.


  • Visual Studio 2005 won't install on Windows 7

    - by Peanut
    Hi, my question relates very closely to this one: http://superuser.com/questions/34190/visual-studio-2005-sp1-refuses-to-install-in-windows-7 However, that question hasn't provided the answer I'm looking for. I'm trying to install Visual Studio 2005 onto a clean Windows 7 (64-bit) box, but I keep getting the following error when the 'Microsoft Visual Studio 2005' component finishes installing:

        Error 1935. An error occurred during the installation of assembly
        'policy.8.0.Microsoft.VC80.OpenMP,type="win32-policy",version="8.0.50727.42",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="x86"'.
        Please refer to Help and Support for more information. HRESULT: 0x80073712.

    On my first attempt to install VS 2005 I got a warning about compatibility issues. I stopped at that point, downloaded the necessary service packs, and restarted the installation from the beginning. Ever since then I just get the error message above. I keep rolling back the installation and trying again, but it's always the same error. Any help would be very much appreciated. Thanks.

    Read the article

  • Accessing SSH_AUTH_SOCK from another non-root user

    - by Danny F
    The Scenario: I am running ssh-agent on my local PC, and all my servers/clients are set up to forward SSH agent auth. I can hop between all my machines using the ssh-agent on my local PC. That works. I need to be able to SSH to a machine as myself (user1), change to another user named user2 (sudo -i -u user2), and then ssh to another box using the ssh-agent I have running on my local PC. Let's say I want to do something like ssh user3@machine2 (assuming that user3 has my public SSH key in their authorized_keys file). I have sudo configured to keep the SSH_AUTH_SOCK environment variable. All users involved (user[1-3]) are non-privileged users (not root).

    The Problem: When I change to another user, even though the SSH_AUTH_SOCK variable is set correctly (let's say it's set to /tmp/ssh-HbKVFL7799/agent.13799), user2 does not have access to the socket that was created by user1. Which of course makes sense, otherwise user2 could hijack user1's private key and hop around as that user. This scenario works just fine if, instead of getting a shell via sudo for user2, I get a shell via sudo for root, because naturally root has access to all the files on the machine.

    The question: Preferably using sudo, how can I change from user1 to user2, but still have access to user1's SSH_AUTH_SOCK?
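
    A minimal sketch of one approach (not from the question; the user names are the question's placeholders): keep SSH_AUTH_SOCK across sudo and grant user2 access to user1's agent socket with POSIX ACLs, assuming the filesystem holding /tmp supports ACLs.

        # In /etc/sudoers (edit with visudo):
        #   Defaults env_keep += "SSH_AUTH_SOCK"
        # Then, as user1, before switching users:
        setfacl -m u:user2:x "$(dirname "$SSH_AUTH_SOCK")"   # let user2 traverse the agent directory
        setfacl -m u:user2:rw "$SSH_AUTH_SOCK"                # let user2 use the socket
        sudo -i -u user2
        ssh user3@machine2                                    # should now reach user1's agent

    The obvious trade-off is that user2 can use (though not read) user1's keys for as long as the ACL is in place.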

    Read the article

  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration, and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come up, a new server will be created. I decided to use Chef for this task. All servers will be running Ubuntu at the same version and configuration. The actual question: everything needs to be tested properly before starting live deployment, so what is the best virtualisation tool to run multiple (5 - 10) virtual machines on an Ubuntu laptop? Requirements: easy setup; fast (clone/snapshot of VMs); all VMs should be easily connected to the internet and should be able to communicate with each other; open-source/free would be great. So far I looked into VirtualBox (more for desktop virtualisation; cloning not possible, every new machine needs to be installed) and VMware Player. Any suggestions? If there are any questions about what I am doing, please comment on this question and I will answer as soon as possible. This question is not about the actual set up; it is about a nice working environment.
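
    A small sketch of the kind of workflow this calls for (not from the question; assumes a recent VirtualBox whose VBoxManage CLI supports cloning, and hypothetical VM names):

        # Clone a prepared Ubuntu base VM, snapshot it, and run it headless
        VBoxManage clonevm "ubuntu-base" --name "web01" --register
        VBoxManage snapshot "web01" take "clean-install"
        VBoxManage startvm "web01" --type headless
        # Roll back to the clean state between test runs
        VBoxManage snapshot "web01" restore "clean-install"

    With an internal or host-only network plus NAT, the clones can reach both each other and the internet.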

    Read the article

  • Permissions won't cascade more than 1 level

    - by Jovin_
    Running Windows Small Business Server 2011, I have a file structure with a lot of subfolders (sometimes 5-6 levels deep). I have created access groups to grant access to my users, and also deny groups to deny access to others: X Access & X Deny. These allow or deny access to a mapped network drive X:. On the server I put in the groups with Full Control Allow for X Access and Full Control Deny for X Deny. I also tick the box "Apply these permissions to objects and/or containers within this container only" and have ensured that "Apply to:" is "This folder, subfolders and files". But for some reason the permissions will only apply to the next level of folders & files. Example structure:

        X:
          Folder 1
            Folder 1a
          Folder 2
            Folder 2a

    If I apply the permissions to X: it'll only go to Folder 1 & 2, not 1a and 2a; I then need to manually apply the permissions to these too. Is this working as intended or am I doing something wrong?
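
    As a side note, a hedged way to inspect and apply inheritable ACEs from the command line (not from the question; group and path names are the question's placeholders):

        rem Grant inheritable ACEs at the top of the drive:
        rem (OI) = object inherit, (CI) = container inherit
        icacls X:\ /grant "X Access":(OI)(CI)F
        icacls X:\ /deny  "X Deny":(OI)(CI)F
        rem Re-enable inheritance down an existing tree so the ACEs propagate:
        icacls "X:\Folder 1" /inheritance:e /T

    Ticking "Apply these permissions to objects and/or containers within this container only" is what limits an ACE to direct children, so that box is worth re-checking against the intended depth.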

    Read the article

  • Running CGI With Perl under Apache Permission Problem

    - by neversaint
    I have the following entry under apache2.conf in my Debian box:

        AddHandler cgi-script .cgi .pl
        Options +ExecCGI
        ScriptAlias /mychosendir/cgi-bin/ /var/www/mychosendir/cgi-bin/
        <Directory /var/www/mychosendir/cgi-bin>
            Options +ExecCGI -Indexes
            allow from all
        </Directory>

    Then I have a perl cgi script stored under these directories and permissions:

        nvs@somename:/var/www/mychosendir$ ls -lhR
        .:
        total 12K
        drwxr-xr-x 2 nvs nvs 4.0K 2010-04-21 13:42 cgi-bin

        ./cgi-bin:
        total 4.0K
        -rwxr-xr-x 1 nvs nvs 90 2010-04-21 13:40 test.cgi

    However when I tried to access it in the web browser:

        http://myhost.com/mychosendir/cgi-bin/test.cgi

    They gave me this error:

        [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] (8)Exec format error: exec of '/var/www/mychosendir/cgi-bin/test.cgi' failed
        [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] Premature end of script headers: test.cgi

    What's wrong with it?

    Update: I also have the following entry in my apache2.conf:

        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
        </Files>

    And the content of test.cgi is this:

        #!/usr/bin/perl -wT
        print "Content-type: text/html\n\n";
        print "Hello, world!\n";
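
    A hypothetical first check, not from the question: "(8)Exec format error" on a CGI script is very often a shebang line broken by DOS (CRLF) line endings, which is easy to confirm and fix:

        file /var/www/mychosendir/cgi-bin/test.cgi        # look for "CRLF line terminators"
        dos2unix /var/www/mychosendir/cgi-bin/test.cgi    # strip the carriage returns
        head -1 /var/www/mychosendir/cgi-bin/test.cgi | od -c   # the shebang should end in \n, not \r\n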

    Read the article

  • email dropbox between two mutually untrusted sites

    - by user52874
    I've an interesting problem that I thought was straightforward, but it turns out I think I'm whistling down the wrong path. It has to do with (shudder) email. I thought I was done with needing to know about email guts ten years ago; I was wrong. Anyway. Simply put, I need to figure out how to relay outgoing email that is not targeted at our domain from our domain into a 'dropbox' in a DMZ, so that the Other Guys can retrieve that email from their side of the DMZ and distribute it accordingly, even out to the public internet if need be. There will be no [un-established] traffic coming back to Our side from anywhere; any attempts to do so are dropped with malicious prejudice. Our side is Postfix running on Scientific Linux 6.1. The DMZ boxes are Red Hat 5.4. The Other Guys are M$ Exchange. The firewalls are set up such that data can go from Our Side downsec to the DMZ, but not upsec from the DMZ into Our Side. Same for the Other Guys. My first thought was simply to set up Postfix on a box in the DMZ and tell them to set up fetchmail or whatever the M$ equivalent is, but then I started remembering that Postfix wants to actively relay email onwards, rather than hold it and wait for someone to 'reach in' and retrieve it. I'm not sure I've explained this well, but hopefully it's clear enough that someone can point me in the right direction. I seem to remember having done this before, but it was a looong time ago. Thanks!
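
    One hedged sketch of the shape this could take (not from the question; hostnames are hypothetical): Our Side's Postfix pushes everything non-local to the DMZ box, and the DMZ box delivers into local mailboxes that the Other Guys poll, rather than relaying onward.

        # Our Side, /etc/postfix/main.cf: hand all non-local mail to the DMZ dropbox
        relayhost = [dropbox.dmz.example]

        # DMZ box: accept those domains, deliver to local (e.g. Maildir) mailboxes,
        # and expose them over POP3/IMAP (Dovecot or similar) so the Other Guys'
        # side can fetch on their own schedule instead of having mail pushed to them.

    Since nothing on the DMZ box initiates connections toward either side, the "no un-established traffic upsec" rule is preserved.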

    Read the article

  • Strange corruption saving from Textpad 5 within Windows 7-64 VirtualBox VM to shared folder with Mac host

    - by joelarson
    I have a fairly new Windows 7 64-bit install running in VirtualBox on a MacBook Pro. I'm using TextPad 5 within that environment to edit source files that live on a shared folder on the Mac host. When I save some of these source files, the saved file ends up with some amount of the end of the file repeated one or more times. For example, a file that ends with:

        ...
        return ttp;
        };

    would, once saved, open up with:

        ...
        return ttp;
        };
        };

    It is definitely a problem with how the file gets written as opposed to how it's read, because I can see this no matter what app I use to open the file (Notepad & Word in Windows 7, TextWrangler back on the Mac). I've tried saving as ANSI and UTF-8, and with or without 'Write Unicode and UTF-8 BOM' checked in TextPad preferences. It doesn't happen with all files, though I can't see any pattern as to which files do or don't have the problem. It doesn't happen with files written to the Windows 7 C:\ drive. And so far it doesn't happen with other applications saving files, only TextPad. Any ideas? My versions: TextPad 5.4.2; Windows 7 Professional 64-bit, fully up to date; VirtualBox 4.0.8 r71778; OSX 10.6.7.

    Read the article

  • Hyperic HQ- Monitor process statistics for 50+ processes on Linux machine

    - by Chris
    Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually that all start with the prefix XYZ. I have created a query using the sigar shell: ps State.Name.sw=XYZ, which will give me a list of the processes that I want. What I need to do is define this list of processes through said query and collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end. Note: Hyperic HQ server is installed on a windows machine and I'm monitoring a Linux box via an agent. Thanks, Chris

    Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks.

        <!DOCTYPE plugin [
          <!ENTITY process-metrics SYSTEM "/pdk/plugins/process-metrics.xml">]>
        <plugin>
          <server name="ABCStats">
            <config>
              <option name="process.query" description="Process Query" default="State.Name.sw=XYZ"/>
            </config>
            <metric name="Availability" alias="Availability"
                    template="sigar:Type=ProcState,Arg=%process.query%:State"
                    category="AVAILABILITY" indicator="true" units="percentage"
                    collectionType="dynamic"/>
            &process-metrics;
            <plugin type="autoinventory"/>
            <plugin type="measurement" class="org.hyperic.hq.product.MeasurementPlugin"/>
          </server>
        </plugin>

    Read the article

  • Cannot establish a network connection to VMWare Fusion VM

    - by twolfe18
    I am running Snow Leopard 10.6.2 (not the server edition) with VMware Fusion 3.0.0 and I am trying to get my Ubuntu 9.10 x86_64 VM working. I am using a bridged connection, and I have an internet connection FROM the Ubuntu VM (I can download updates, ping websites, etc), but I cannot connect TO the Ubuntu box from any other device on my network. I am trying to get Mongrel up on the Ubuntu VM for some Rails stuff, but it's not working. I know Mongrel/Rails is not the problem, because if I start the server on the Ubuntu VM, background the process, and then wget the index page, it works. I just cannot connect to the site from another IP. I have tried using a static IP and a DHCP IP configuration on the Ubuntu VM; neither works for incoming connections, though both work for outgoing. I have port scanned the Ubuntu VM, and it appears that all ports are closed. However, the Ubuntu VM does respond to pings. I noticed a similar question here: http://serverfault.com/questions/99757/setting-up-a-bridged-network-with-vmware-fusion, but no answer. Any ideas?
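
    A couple of hypothetical checks on the guest, not from the question, that would distinguish "nothing is listening externally" from "something is filtering":

        sudo netstat -tlnp       # is Mongrel bound to 0.0.0.0:3000 or only 127.0.0.1:3000?
        sudo iptables -L -n -v   # any REJECT/DROP rules counting packets?
        sudo ufw status          # Ubuntu's firewall front end, if enabled

    A service bound only to the loopback address would explain exactly this pattern: local wget works and pings answer, but every port looks closed from outside.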

    Read the article

  • Is there a reason to use internal DNS over 8.8.8.8 ?

    - by skylarking
    I've inherited a LAN where there is really no name resolution being done for local resources; i.e., all users enter IP addresses manually to access printers and network shares. There are no LDAP servers or domains either; workstations simply connect to the network without authentication. DHCP is handled via a core switch, and DNS settings are also handed out by this same core switch. Currently, the DNS assignments are as follows, in this order:

        10.1.1.50   - old Pentium III Windows 2003 box running the DNS service, 128 MB RAM
        169.200.x.x - ISP
        4.2.2.2     - the well-known public one

    There are a couple thousand clients on the LAN, and most of the activity is web browsing (this is an educational setting). First of all, the server seems woefully underpowered for this task, yet there is virtually no slowness when web surfing by clients. How much horsepower should a heavily used DNS server have? I have also heard using 4.2.2.2 is a bad idea, since it has been so overused. Finally, wouldn't it make sense to have a robust external DNS server listed first? (Google's 8.8.8.8 would seem to be a logical candidate.)
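
    A quick, hypothetical way to put numbers on "is the internal box actually slow" before re-ordering anything (not from the question; the hostname is a placeholder):

        dig @10.1.1.50 www.example.edu | grep "Query time"   # internal server, warm vs. cold cache
        dig @8.8.8.8   www.example.edu | grep "Query time"   # Google's resolver from the same client

    Cached answers from a LAN-local resolver will usually beat any external resolver on latency, which is one reason to keep an internal (or at least on-site caching) server first even if it forwards upstream.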

    Read the article

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft-brand USB device that acts as a receiver for a wireless Microsoft keyboard and a wireless mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter two are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit running on an AMD Athlon 64 X2 Dual Core processor 5000+. When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good.

    However, something changed in, I think, the past couple of weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer, everything would operate normally as far as I could tell. Now, after waking the computer, the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while it does so, but it has no effect in this state. It's like the receiver doesn't have power (but how would the system know I moved the mouse, unless the power was on until it woke up?).

    I have checked the BIOS/CMOS settings (or whatever you call them) and did not see anything related to USB in the power management section. I have checked Windows 7 Device Manager and ensured that all the USB Root Hub devices have the setting unchecked for allowing USB power to be turned off. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows Updates.
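
    Two hypothetical commands, not from the question, that show which devices are allowed to wake the machine and what actually triggered the last wake (run from an elevated command prompt):

        powercfg -devicequery wake_armed
        powercfg /lastwake

    Comparing that list before and after a driver update can show whether the update silently changed the receiver's power/wake settings.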

    Read the article

  • How can I "share" a network share over the internet to multiple operating systems?

    - by Minsc
    Hello all. We have a network share accessible through our intranet that is widely used. This share has its own set of finely tuned permissions. I have been tasked with allowing A.D.-authenticated access to this share over the internet without the use of VPN. The internet access has to mimic the NTFS permissions in place on the share. Another piece of the puzzle is that the access over the internet has to allow perusal of the share from Windows and Mac OS systems. I had envisioned a web front end that would facilitate downloading to and uploading from the share via a web browser. I'm trying to ask for some suggestions about what type of setup is necessary to achieve this. I've done loads of testing and searching for solutions, but I can't seem to get anything to work as I hope. The web server that will be handling all of this is a Windows 2K8 box with IIS 7. How can I allow the users to authenticate against Active Directory when coming from the internet, even when coming from a Mac system? I hope my question is not too broad; I'm sorry if I should have broken it up into multiple questions. It all is just tied together in my head. Thank you all for your time and aid.

    Read the article

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    OK, we are at a loss here trying to back up a Linux box to a Backup Exec server. We have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like:

        BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found.

    Looking at the selection list, the symlink shows as a 1k file on BUE. Tools-Options-Backup has "Backup files and directories by following symbolic links/junction points" selected. These same checkboxes are selected on Job Setup-Job Properties-Edit Template-Advanced. Additionally, all the checkboxes are checked on Tools-Options-Linux, Unix, and Macintosh and on Job Setup-Job Properties-Edit Template-Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.

    Read the article

  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    Hi. I'm attempting to set up an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file:

        service vnc
        {
            protocol        = tcp
            socket_type     = stream
            wait            = no
            user            = nobody
            server          = /usr/bin/Xvnc
            server_args     = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None
        }

    This kind of works: I can start multiple sessions, all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started. I know that when you run vncserver from the command line it will look in your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstartup file. However, which xstartup file? The running user is "nobody", who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file and it doesn't look like they were even read. I changed the xinetd vnc service so that it would "strace" Xvnc4. I looked through all the "open" lines and didn't get a clue as to what file it was trying to read for xstartup. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
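
    A hedged variation on that xinetd entry, not from the question: when the goal is a gdm greeter rather than a per-user session, Xvnc can request a login screen over XDMCP instead of reading any xstartup file, assuming XDMCP is enabled in gdm (e.g. an [xdmcp] section with Enable=true in /etc/gdm/custom.conf):

        server_args = -inetd -query localhost -geometry 1000x700 -depth 24 -once -securitytypes None

    The -broadcast option is dropped in favour of -query localhost, so each incoming connection gets a fresh X session pointed at the local display manager.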

    Read the article

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slow-downs and I'm not surprised (it's around 6 years old). Here's what I've verified: they are not very frequent (only a couple of times a day); when they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow; even as it is happening, the CPU usage stays low; it happens to applications such as a text editor, Firefox, and Skype; it never happens to some applications (such as games) which I use for hours under heavy CPU load. Also of note: the graphics card and PSU are new (around a year); though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows; this HDD has been through many partitioning schemes and a few heavy operations (such as moving around 200GB of data). Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other less likely possibilities (such as RAM, software, or the PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since this new hard drive won't have any of the software I use daily). Running on Windows/Linux.
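
    A few hypothetical checks from the Linux side, not from the question, that can implicate or clear the drive without replacing anything:

        sudo smartctl -H -A /dev/sda   # SMART health; watch Reallocated_Sector_Ct and Current_Pending_Sector
        sudo hdparm -t /dev/sda        # raw sequential read throughput
        dmesg | grep -i "ata\|error"   # kernel-level I/O errors or bus resets during a slow-down

    Intermittent multi-second hangs with low CPU usage, plus pending or reallocated sectors in SMART, would point strongly at the disk.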

    Read the article

  • glib2 64-bit compile fails on Solaris 10

    - by Aaron
    I'm encountering a problem building glib-2.26.1 on a Solaris 10 box, 64-bit. Due diligence doesn't turn anything up, but no matter what I do the build fails in the same way. I've tried using the Sun Studio compiler and gcc (SFW), to no avail. When I compile I get the following error:

        [root@foo glib-2.26.1]$ export CC=/opt/solstudio12.2/bin/cc
        [root@foo glib-2.26.1]$ export CFLAGS="-m64"
        ...configure goes normally...
        [root@foo glib-2.26.1]$ make
        ...snip...
        source='gatomic.c' object='gatomic.lo' libtool=yes \
        DEPDIR=.deps depmode=none /bin/bash ../depcomp \
        /bin/bash ../libtool --tag=CC --mode=compile /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c -o gatomic.lo gatomic.c
        libtool: compile: /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c gatomic.c -KPIC -DPIC -o .libs/gatomic.o
        "gatomic.c", line 885: warning: no explicit type given
        "gatomic.c", line 885: syntax error before or at: *
        "gatomic.c", line 885: warning: old-style declaration or incorrect type for: g_atomic_mutex
        "gatomic.c", line 906: warning: implicit function declaration: g_mutex_lock
        "gatomic.c", line 909: warning: implicit function declaration: g_mutex_unlock
        "gatomic.c", line 1155: warning: implicit function declaration: g_mutex_new
        "gatomic.c", line 1155: warning: improper pointer/integer combination: op "="
        cc: acomp failed for gatomic.c
        make[4]: *** [gatomic.lo] Error 1
        make[4]: Leaving directory `/root/glib-2.26.1/glib'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/root/glib-2.26.1/glib'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/root/glib-2.26.1/glib'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/root/glib-2.26.1'
        make: *** [all] Error 2

    Does anyone know where the build might be going wrong? Not sure where else to look here. Thanks.

    Read the article

  • No external src ip in log files (my router ip appears instead)

    - by bongo_fury
    I recently retired my workhorse WRT54G router/AP in favor of a Linksys EA2700. Since then, all inbound traffic bound to an Ubuntu 10.02 box running LAMP behind said router, and logged to syslog, Apache's error and access logs, etc., is getting logged with a src IP of 192.168.1.1, that of the router's internal IP. For example, here is an old entry from Apache's access.log:

        74.82.68.20 - - [22/Feb/2011:10:14:34 -0600] "GET /assets/css/style.css HTTP/1.1" 304 154 "http://example.com/view.php?event_id=1" "BlackBerry8520/5.0.0.822 Profile/MIDP-2.1 Configuration/CLDC-1.1 VendorID/100"

    And here is one since switching the router:

        192.168.1.1 - - [05/Oct/2012:21:29:25 -0500] "GET /somedir/print.css HTTP/1.1" 200 650 "http://example.com/somedir/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"

    That first field is the problem. Each and every entry in every log shows an "external" IP of 192.168.1.1, which isn't very helpful. Any ideas? Much thanks from a n00b!

    Read the article

  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file:

        Backup Status
        Operation: Backup
        Active backup destination: 4mm DDS
        Media name: "Media created 04/02/2011 at 21:56"

        Error: The device reported an error on a request to read data from media.
        Error reported: Invalid command.
        There may be a hardware or media problem.
        Please check the system event log for relevant failures.

        Error: C: is not a valid drive, or you do not have access.
        The operation did not successfully complete.

    I'm using a brand new SATA Quantum DAT-72 drive with a brand new tape (I've tried a couple of tapes). I carry out the following: open NTBackup, select the Backup tab, tick the box next to C:, ensure the destination is 4mm DDS, set Media to New, and press Start Backup; then I choose "Replace the data on the media" and press Start Backup. NTBackup tries to mount the media and then shows the error message: "The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures." On checking the log I find the following:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8018
        Date: 04/02/2011
        Time: 22:02:02
        User: N/A
        Computer: SERVER
        Description: Begin Operation
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    and then:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8019
        Date: 04/02/2011
        Time: 22:02:59
        User: N/A
        Computer: SERVER
        Description: End Operation: The operation was successfully completed. Consult the backup report for more details.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Quick, Linux-compatible unit-aware calculator

    - by endolith
    I want to be able to press a keyboard combination, start typing a mathematical expression that includes units and slightly advanced math (not just a four-function calculator), and get a result immediately, in units that I specify, that I can copy and paste. Currently I open Firefox and press Ctrl+K, type in the search box, and it usually gives me a result in the drop-down from Google Calculator. It doesn't always, though, so I press "=" at the end, wait for a result, remove the equals, wait for a result, realize it doesn't understand the way I typed a unit, open the result in a new tab, etc. It sucks. Wolfram Alpha is smarter, but very much slower, and the output is all images, not text, and I don't have a quick widget for it, if such a thing could even exist. GNU units has a ton of units, which is great, and I can define my own units, which is great, but they have to be written in specific, unintuitive ways, it doesn't handle much advanced math, and I'd need to open a terminal, start units, etc. I hate the command line. I wasted a lot of time trying to make front-ends for units in Deskbar and Launchy, but I'm not a real coder and I don't use either of those anymore. Any other solutions or enhancements of these?
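
    For reference, a hypothetical illustration (not from the question) of GNU units doing one-shot, scriptable conversions, which is what makes it feasible to bind behind a hotkey or launcher despite the command-line interface:

        units -t '100 mph' 'km/hr'       # 160.9344
        units -t '2 m / 300 ms' 'mph'    # mixed units and arithmetic in one expression
        units -t 'tempF(98.6)' 'tempC'   # nonlinear (function-style) unit definitions

    The -t (terse) flag prints just the number, so the output can be piped straight into a clipboard tool or a notification popup.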

    Read the article

  • Moving automatically spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder and I'm not sure how. I have a Linux box providing email access. The MTA is Postfix, IMAP is Courier, and as the webmail client I use SquirrelMail. To filter spam I use SpamAssassin, and it is working OK. SpamAssassin is rewriting subjects to things like "[--- SPAM 14.3 ---] Viagra..." and is also adding headers:

        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
        X-Spam-Level: **************
        X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
            DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
            RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL
            autolearn=no version=3.2.5
        X-Spam-Report:
            * 0.0 URIBL_RED Contains an URL listed in the URIBL redlist
            *     [URIs: myimg.de]
            * 3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
            *     [score: 1.0000]
            * 0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
            *     [113.170.131.234 listed in zen.spamhaus.org]
            * 3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
            * 0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server
            *     [113.170.131.234 listed in dnsbl.sorbs.net]
            * 3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
            * 0.0 HTML_MESSAGE BODY: HTML included in message
            * 1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
            * 1.5 URIBL_SBL Contains an URL listed in the SBL blocklist
            *     [URIs: myimg.de]
            * 0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS

    I want to automatically move spam messages to a folder. Ideally (not sure if this is possible) I only want to move messages with a score of 5.0 or more to the folder; spam scored between 2.0 and 5.0 I want stored in the Inbox. (I plan to switch autolearn on later.) After reading a lot on the procmail, Postfix and SpamAssassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I'm not sure which is the best or if there is another one: put a rule in SquirrelMail (dirty solution?), or use procmail. Which is the best option? Do you have any updated howto about it? Thanks
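
    A minimal procmail sketch of the second option (not from the question; the folder name and Maildir layout are assumptions, in the Courier style):

        # ~/.procmailrc (or /etc/procmailrc), with Postfix handing delivery to procmail
        MAILDIR=$HOME/Maildir
        DEFAULT=$MAILDIR/

        # File anything SpamAssassin scored at 5.0+ (five or more asterisks in
        # X-Spam-Level) into the .Spam folder; lower scores fall through to the inbox.
        :0:
        * ^X-Spam-Level: \*\*\*\*\*
        .Spam/

    Since X-Spam-Level encodes the score as one asterisk per point, matching on a run of asterisks is the usual way to express a "score of N or more" rule in procmail.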

    Read the article

  • Acronis Disk Director AFTER Clone Disk error: PXE-E61: Media test failure, check cable

    - by Kairan
    I used Acronis Disk Director on my desktop, plugged in the laptop drive (240GB SSD, via USB) and the new hard drive (500GB SSD, via USB), and the copy seemed to be fine. I didn't see any error messages, but I didn't stare at it for 3 hours either. The disk clone of course included the Toshiba hidden restore partition, the primary partition (the C: drive) and the active (boot?) partition, and yes, I did check the box to copy the NT signature. The computer boots up fine most of the time, but it seems that when the computer goes to sleep (I believe it's sleep; it's hard to do much testing during school), hibernates or reboots, it will sometimes display this message:

        Intel(R) Boot Agent GE v1.3.52
        Copyright (C) 1997-2010, Intel Corporation
        PXE-E61: Media test failure, check cable
        PXE-M0F: Exiting Intel Boot Agent
        Insert system disk in drive.
        Press any key when ready...

    Of course any key does nothing but repeat a similar message. However, if I press the power button on the laptop (a Toshiba Portege R705, Windows 7 Pro 64-bit) it puts the computer into hibernation. After hibernating I press the power button again and it comes out of hibernation without any of the odd messages or problems described above... so apparently that is my TEMP fix. Another recent issue I noticed is that on occasion, when creating a new folder or modifying something in the system variables or other random areas, I will get the message "The stub received bad data"; I simply retry the task and it works. Perhaps these two issues are linked.

    Read the article

  • How to eliminate overscan on Ubuntu HTPC

    - by Norman Ramsey
    I'm using an Ubuntu box with an Nvidia graphics card as an HTPC. My HDTV is a Sony Bravia KDS-R50XBR1; this is a rear-projection unit with many inputs, and I am using the HDMI input. I'm using the proprietary Nvidia drivers and they recognize the 1920x1080 resolution just fine. The display is a little fuzzy, but at 50 inches it's perfect for movies. My problem is that the TV has three overscan settings, and none of them reduces overscan to zero. When I was using dual-screen this was fine, but I'm moving to where the TV is my only screen, and the Gnome panels are not visible because of the overscan. I'd like to figure out how to eliminate the overscan, without trying to scale my 1080p content down to 920p or something ridiculous like that. Ideally there would be some scurvy trick, perhaps involving the TV service menu, to get rid of the overscan on the TV side. Or I could move the Gnome panels, but I still would be missing the edges of my movies. Suggestions most welcome.
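
    One hedged possibility on the driver side, not from the question and dependent on a reasonably recent proprietary Nvidia driver: the MetaMode ViewPortIn/ViewPortOut attributes can shrink the scanned-out area so the full desktop fits inside the TV's overscan, at the cost of scaling:

        # Hypothetical values; the output name and border sizes would need tuning
        nvidia-settings --assign CurrentMetaMode="DFP-0: 1920x1080 { ViewPortIn=1920x1080, ViewPortOut=1824x1026+48+27 }"

    A service-menu adjustment on the TV side, where available, avoids the scaling entirely, which is why it is usually the preferable fix.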

    Read the article

  • How do I upgrade the BIOS to boot the motherboard when the CPU is not supported?

    - by Matt
    So I have a Tyan S8225 motherboard with a Valencia (Opteron 4200) CPU. Two of us have tried everything. We have even swapped the whole motherboard, power supply, memory, even a different CPU (still a Valencia). We even tried it without any memory installed and with only a single CPU. There are no beep codes; I suspect even those are controlled by the BIOS boot process. We put the whole thing on the bench with just the power supply, VGA, keyboard and network (for IPMI) connected. The IPMI is not even showing the BIOS starting, but the IPMI is working. After some hunting around, I discovered the claim on the website is that the Valencia CPUs are not supported on older BIOS revisions. For a start, I don't know what the BIOS revision is or whether it's older, but it's the only thing left. Could the BIOS be causing the board not to boot at all? If that's the case, then is there any other way to update the BIOS without buying an old CPU only to put it back in a box just to update the BIOS? Yes, we even tried updating the BIOS through IPMI, but you can't do that either.

    Read the article

  • How to configure DNS server to forward queries about particular domain AND all of its subdomains

    - by user71061
    I have a DNS server (a Linux box with bind9) which is authoritative for some domains and forwards all other queries to the external DNS server of my ISP. So far no problem. Now I want queries about some specific domains to be forwarded to my internal DNS server, e.g.:

        zone "some_domain" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    So far still no problem; all works OK. But then, I also want to forward some reverse DNS queries to my internal DNS. So, I have added:

        zone "16.172.in-addr.arpa" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    And this doesn't work as I expect. Queries about "16.172.in-addr.arpa" (for example 1.16.172.in-addr.arpa) are resolved correctly, but reverse queries about a full address (for example 1.1.16.172.in-addr.arpa) are not. I understand that my server should use some recursive query here, but I could not configure it. I have already tried adding the following options:

        recursion yes;
        allow-recursion { 127.0.0.1; };
        allow-recursion-on { 127.0.0.1; };

    but with no success. (I have used the loopback address here because I need this functionality only for my DNS host, and not for its clients.) Any suggestions?
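
    A hedged tweak worth trying (not from the question; same placeholder values): marking the forward zone as "forward only" stops bind from falling back to iterative resolution when the forwarder's answer doesn't satisfy it:

        zone "16.172.in-addr.arpa" {
            type forward;
            forward only;
            forwarders { some_internal_dns_ip; };
        };

    With "forward first" (the default), a failed or unsatisfying forwarded lookup makes named chase the public in-addr.arpa delegation itself, which for RFC 1918 space will never reach the internal server.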

    Read the article
