Search Results

Search found 23404 results on 937 pages for 'script compression'.


  • Allow opening a new tab with Ctrl+T on all websites in Firefox

    - by Martin J.H.
    In Firefox, certain websites and plugins (e.g. the Adobe PDF plugin) appear to "capture" the Control key, so that when I try to open a new tab using Ctrl+T, nothing happens - or worse, something unexpected happens. Examples: on the Codecademy site, while editing code, Ctrl+T either does nothing or (when Flash is disabled) switches the position of the two characters next to the cursor. When viewing PDFs with the Adobe PDF plugin, Ctrl+T does nothing. Is there a way to disable this "feature"? I would like Ctrl+T to always "talk" to Firefox!
    Edit: After searching Super User more thoroughly, this question is very similar to "How to prevent keystroke grabbing/hijacking by websites in Firefox?" and "How do I prevent pages I visit from overriding selected Firefox shortcut keys?". The answers to those questions are interesting and relevant, but do not give a method for protecting combinations such as Ctrl+T. Maybe a modified Greasemonkey script is the easiest solution.
    Edit 2 - Attempt at a solution: The following user script (install it with Greasemonkey) successfully captures Ctrl+T on some sites (the Google Search site, for instance - the "Gotcha" popup appears), but not on the Codecademy site. I also found an older question on this subject, "How to forbid keyboard shortcut stealing by websites in Firefox"; it was raised in 2010, and the consensus was: it can't be done.

        // ==UserScript==
        // @name          Disable Ctrl T interceptions
        // @description   Stop websites from hijacking keyboard shortcuts
        // @run-at        document-start
        // @include       *
        // @grant         none
        // ==/UserScript==

        // Keycode for 't'. Add more codes to disable other Ctrl+X interceptions.
        var keycodes = [84];

        document.addEventListener('keydown', function (e) {
            // Uncomment to find out the keycode for any given key:
            // alert(e.keyCode);
            if (keycodes.indexOf(e.keyCode) != -1 && e.ctrlKey) {
                // Keep the event away from the page's own handlers.
                e.cancelBubble = true;
                e.stopImmediatePropagation();
                alert("Gotcha!");
            }
        });

    Read the article

  • Windows Server NTFS volume list file name encodings and any illegal file names

    - by benbradley
    I'm having to deal with a Windows Server (NTFS) file server, and our backup application appears to be failing with certain files. According to https://en.wikipedia.org/wiki/NTFS#Internals, NTFS stores file names encoded in UTF-16, but according to their support team, our backup application only supports UTF-8. I'd like to confirm whether this is actually the problem by seeing the file name encoding for myself. The files that are failing appear to use plain English A-Z letters and other ASCII characters - no accents or non-English letters. I suppose that even though the letters appear to be plain A-Z, the file name could still be encoded in UTF-16. (As I understand it, NTFS always stores names as UTF-16 internally; names made up only of ASCII characters convert losslessly to UTF-8, so a failure would suggest the names contain something beyond plain ASCII, such as unpaired surrogates that have no UTF-8 representation.) Does anyone know of a utility or script that can recursively go through all files in a directory and show the encoding of each file name? Then I could try renaming to UTF-8 to see if the backup can proceed. I'm not a Windows developer, so I can't write this myself. Presumably the encoding of the file name is stored in the file system somewhere, and therefore it should be possible to expose it.

    Read the article

  • Authenticate VNC session with ConsoleKit?

    - by lori
    I have a linux machine running Fedora 16 in a cupboard. It has no screen or keyboard; I connect to it using a combination of VNC and SSH. Recently, after an update, I have had issues with authentication on the machine. If I VNC to it, the KDE desktop pops up an error dialog every few minutes saying "Authorization failed. Failed to obtain authentication." If I plug in a USB drive it fails to mount; Dolphin reports an authentication issue again. I have had limited success finding the solution. AFAICT, it is an issue with ConsoleKit deeming me to be a non-local user, so it prevents authentication. This is the output from ck-list-sessions:

        $ ck-list-sessions
        Session5:
            unix-user = '1000'
            realname = 'steve'
            seat = 'Seat6'
            session-type = ''
            active = FALSE
            x11-display = ':1'
            x11-display-device = ''
            display-device = ''
            remote-host-name = ''
            is-local = FALSE
            on-since = '2012-09-16T08:07:03.137011Z'
            login-session-id = '1'

    I have tried to update my .vnc/xstartup script to include ck-launch-session as follows:

        $ cat ~/.vnc/xstartup
        #!/bin/sh
        exec ck-launch-session vncconfig -iconic &
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        export XKL_XMODMAP_DISABLE=1
        OS=`uname -s`
        if [ $OS = 'Linux' ]; then
            case "$WINDOWMANAGER" in
                *gnome*)
                    if [ -e /etc/SuSE-release ]; then
                        PATH=$PATH:/opt/gnome/bin
                        export PATH
                    fi
                    ;;
            esac
        fi
        if [ -x /etc/X11/xinit/xinitrc ]; then
            exec ck-launch-session /etc/X11/xinit/xinitrc
        fi
        if [ -f /etc/X11/xinit/xinitrc ]; then
            exec ck-launch-session sh /etc/X11/xinit/xinitrc
        fi
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        exec ck-launch-session xsetroot -solid grey
        exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        exec ck-launch-session twm &

    This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?

    Read the article

  • Enabling CURL on Ubuntu 11.10

    - by Afsheen Khosravian
    I have installed curl:

        sudo apt-get install curl libcurl3 libcurl3-dev php5-curl

    and I have updated my php.ini file to include (I also tried .so):

        extension=php_curl.dll

    To test whether curl is working I created a file called testCurl.php containing the following:

        <?php echo ‘<pre>’; var_dump(curl_version()); echo ‘</pre>’; ?>

    When I navigate to localhost/testCurl.php I get an error: HTTP Error 500. Here's a snippet from the error log:

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/php_curl.dll' - /usr/lib/php5/20090626+lfs/php_curl.dll: cannot op$
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/sqlite.so' - /usr/lib/php5/20090626+lfs/sqlite.so: cannot open sha$
        [Sun Dec 25 12:10:17 2011] [notice] Apache/2.2.20 (Ubuntu) PHP/5.3.6-13ubuntu3.3 with Suhosin-Patch configured -- resuming normal operations
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/

    Can anyone help me get curl working? Update: the problem was with the original test code (note the curly quotes). I used a new test file containing the following, and curl is now working:

        <?php
        ## Test if cURL is working ##
        ## SCRIPT BY WWW.WEBUNE.COM (please do not remove) ##
        echo '<pre>';
        var_dump(curl_version());
        echo '</pre>';
        ?>
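
    For what it's worth, the startup warnings in the log point at a second, separate problem: php_curl.dll is a Windows binary and will never load on Linux. On Ubuntu the php5-curl package normally registers the module by itself (via a file under /etc/php5/conf.d/, a path assumed from the stock package layout), so the hand-added php.ini line is presumably unnecessary; if one were added manually, it would reference the shared object instead:

        extension=curl.so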

    Read the article

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi All, There are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc). After sifting through the config files, I figured I'd give nginx a try. Nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear - it can't read the file:

        2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com"

    I've opened up this directory to everyone:

        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 mobile

    And all the files in it:

        $ ls -lah mobile/
        total 24
        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 .
        drwxr-xr-x  71 me  me      2.4K Dec 31 20:41 ..
        -rw-r--r--@  1 me  me      6.0K Jan  2 18:58 .DS_Store
        -rwxrwxrwx   1 me  admin   2.1K Jan  4 14:22 index.html
        drwxrwxrwx   5 me  admin   170B Dec 31 20:45 nbproject
        drwxrwxrwx   5 me  admin   170B Jan  2 18:58 script

    And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
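
    One thing worth checking: the worker needs execute (traverse) permission on every directory along the path, not just on mobile itself, so a restrictive mode on /Users/me or Documents would produce exactly this error even though the file is world-readable. A quick way to inspect the whole chain (a sketch, assuming a POSIX shell):

        # list the file and each ancestor directory; 'nobody' needs +x on all of them
        p=/Users/me/Documents/workspace/mobile/index.html
        while [ "$p" != "/" ]; do ls -ld "$p"; p=$(dirname "$p"); done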

    Read the article

  • How to make new file permissions inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data. I am running a script under the user id 'robot'. robot writes to the data directory and updates files inside it. The idea is that data is open for both me and robot to update. So I set up the permissions and owner group like this:

        drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data

    where both me and robot belong to 'robot-grp'. I changed the permissions and the owner group recursively to match the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped. Instead they look like this:

        -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt

    When robot tries to update new-file.txt, it fails due to lack of file permission. I'm not sure if setting umask helps. In any case the new files do not really follow it:

        $ umask -S
        u=rwx,g=rx,o=rx

    I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
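
    The usual fix is to combine the directory's setgid bit with a default ACL, and to stop rsync from copying the source permissions. A sketch, assuming GNU coreutils, an ACL-enabled filesystem, and the group name from the question:

        chmod g+s data                          # new files inherit the 'robot-grp' group
        setfacl -d -m g:robot-grp:rwX data      # default ACL: the group can write new files
        # force group-writable permissions on upload instead of copying the source's:
        rsync -av --chmod=ug=rwX,o=rX ./src/ server:/path/to/data/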

    Read the article

  • Administrator view all mapped drives

    - by kskid19
    In my understanding of security, an administrator should be able to view all connections to and from a computer - just as they can view all processes and their owners, or network connections and their owning processes. However, Windows 8 seems to have disabled this. As an administrator running an elevated prompt in Windows Vista or 7, running net use returns all mapped drives, listed as unavailable. In Windows 8, the same command run from an elevated prompt returns "There are no entries in the list." The behavior is identical for PowerShell's Get-WmiObject Win32_LogonSessionMappedDisk. A workaround for persistent mappings is to run Get-ChildItem Registry::HKU\*\Network\*, but this does not include temporary mappings (in my particular example the mapping was created through Explorer on an administrator account and I did not select "Reconnect at sign-in"). Is there a direct/simple way for an administrator to view the connections of any user (short of a script that runs under each user's context)? I have read "Some Programs Cannot Access Network Locations When UAC Is Enabled" but I do not think it particularly applies. ServerFault has an answer, but it still does not address non-persistent drives: "How can I tell what network drives users have mapped?"

    Read the article

  • mysql command line not working

    - by Sandeepan Nath
    I have mysql running on my Fedora system. I have xampp set up on the system, and the PHP projects present in the webspace are working fine. PhpMyAdmin is working fine. Echoing phpinfo() in a PHP script also shows mysql enabled. But running the mysql client from the command line:

        mysql -u[username] -p[password]

    gives this:

        bash: mysql: command not found

    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows that mysql is installed. What exactly do I have to do?
    Additional details: This system was someone else's and he is not available here. Maybe PHP/MySQL was already set up on the system. I just freshly extracted xampp for linux into /opt/lampp/ and have put all the above mentioned things (PHP projects and PhpMyAdmin) there. After doing that I had a socket problem (PhpMyAdmin was not working and showing this):

        #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

    I restarted lampp using ./lampp restart but the problem remained. Then after turning on the system today, I started lampp and everything worked just fine. No project issues anymore - only the command-line mysql is not working.
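
    XAMPP for Linux keeps its own client binaries under /opt/lampp/bin, which is not on the shell's PATH by default. A likely fix (a sketch, assuming the stock XAMPP layout):

        export PATH=/opt/lampp/bin:$PATH    # add to ~/.bashrc to make it permanent
        mysql -u username -p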

    Read the article

  • Merge changes in Microsoft Word documents

    - by Álvaro G. Vicario
    I'm using Microsoft Word 2002 to maintain some documentation. The documents are stored in a version control repository (Subversion) together with the source code they document. My Subversion client (TortoiseSVN) comes with a little VBA script that leverages the built-in revisions feature when merging different branches. In other words, when I want to copy changes from one document to another, Word compares both documents (source and target) and builds a third document that has the contents of the source document tagged as revisions, so I can then review the differences one by one and confirm or discard changes. While this is handy, it also means that making a single change to the source document forces me to review all the differences between both documents and discard all of them except the one actual change. My question is: do you know of an application or plug-in that's able to find the differences between two Word documents and apply only those differences to a third document? (I know 2002 is very old, but that's what my company gives me; I'm open to solutions that use newer versions though.)

    Read the article

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and various stuff required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory such that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:

        checking for glib-2.0... checking for glib-config... no
        checking for glib12-config... no
        checking for glib-config... no
        checking for GLIB - version >= 1.2.6... no
        *** The glib-config script installed by GLIB could not be found
        *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
        *** your path, or set the GLIB_CONFIG environment variable to the
        *** full path to glib-config.
        configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

        /lib/libglib-2.0.so.0
        /lib/libglib-2.0.so.0.2200.3

    ... and I've also downloaded and unpacked the glib packages into my working directory:

        libglib2.0-0_2.22.2-0ubuntu1_i386.deb
        libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    ... but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
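
    glib-config was shipped by glib 1.x; glib 2.x replaced it with pkg-config metadata, which is why no Karmic package provides the script. One possible workaround (a sketch - whether mc's configure then accepts glib 2.x depends on the mc version) is a small shim somewhere on $PATH that translates the old interface:

        #!/bin/sh
        # ~/bin/glib-config -- hypothetical shim forwarding to pkg-config
        case "$1" in
            --version) pkg-config --modversion glib-2.0 ;;
            --cflags)  pkg-config --cflags glib-2.0 ;;
            --libs)    pkg-config --libs glib-2.0 ;;
        esac

    Make it executable with chmod +x and re-run configure with PATH=$HOME/bin:$PATH.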

    Read the article

  • Reliable appliance for routing IT emergency calls (SIP and ISDN)

    - by chiborg
    We have a fairly big IT installation, and our IT staff needs to be reachable 24/7. At the moment we have the following setup for "emergency" calls to our IT staff on our main Asterisk box: an incoming emergency number (connected via SIP trunk, with a BRI card as fallback in case the SIP trunk goes down). When the number is called during office hours, all the SIP phones of the IT staff ring simultaneously. When the number is called out of office hours, a list of mobile phone numbers is called, one after another, until someone picks up. The list can be changed by the IT staff via a command line script. The setup works well, but the Asterisk box is heavily used in a call center and has experienced some outages and misconfigurations, each of them bringing down the IT emergency number. So we'd like to put the IT emergency call functionality on a separate device. This does not need to be a big server; it does not even need to be Asterisk. It has only one purpose and should do it reliably, and it should be very low-maintenance. Any suggestions for hard- and software?

    Read the article

  • APC uptime 0 because of FastCGI

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated Oct 31, 2012; CentOS 6.3 (Final) x86_64). I have Apache (CGI/FastCGI) installed and nginx as a reverse proxy. Everything is working just fine. I installed APC for caching, but the issue is that the uptime is always 0 - it restarts every 15 seconds or so. I checked everywhere and can't find a solution. The server has graceful restarts enabled, but only every 6 hours, which shouldn't influence the APC uptime. Searching on Google I found that this could be related to Apache running with mod_fcgid instead of FastCGI. Plesk/Apache is using the config file /usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php, whose content is:

        <IfModule mod_fcgid.c>
        <Files ~ (\.php)>
        SetHandler fcgid-script
        FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$
        Options +ExecCGI
        allow from all
        </Files>

    Is the issue here, or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I forgot to mention that even though the uptime stays below one minute, APC is doing a pretty good job of caching (92% of lookups are hits).

    Read the article

  • administrator user unable to login, suspicious user accounts "sky$", "admin$"

    - by mks
    I have a Windows 2008 R2 Standard (64 bit) running in a virtual machine. Since yesterday I am suddenly unable to log in as administrator. Nobody changed the password. Both on the console and using remote desktop I am unable to log in; whenever I log in as Administrator I get the error "The user name or password is incorrect". Nothing has changed on the machine, and in the past I have logged in successfully several times, both through the console and via remote desktop. One strange thing I noticed: I see some additional user accounts if I try to log in as another user. The suspicious accounts are:

        sky$
        admin$
        SUPPORT_388945a0

    Were they created by some malware/virus? Or are they hidden Windows accounts? Microsoft's site says of SUPPORT_388945a0: "The Support_388945a0 account enables Help and Support Service interoperability with signed scripts. This account is primarily used to control access to signed scripts that are accessible from within Help and Support Services. Administrators can use this account to delegate the ability for an ordinary user, who does not have administrative access over a computer, to run signed scripts from links embedded within Help and Support Services. These scripts can be programmed to use the Support_388945a0 account credentials instead of the user's credentials to perform specific administrative operations on the local computer that otherwise would not be supported by the ordinary user's account. When the delegated user clicks on a link in Help and Support Services, the script executes under the security context of the Support_388945a0 account. This account has limited access to the computer and is disabled by default." However, I am not sure where "admin$" and "sky$" came from. Has anyone had a similar experience?

    Read the article

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems that this needs to be done via the cloud-init script. However, even after I successfully installed the chef gem via cloud-init I was still getting the following error:

        The chef binary (either chef-solo or chef-client) was not found

    A quick google of this error suggested three probable causes: Chef had failed to install; it had installed, but the directory was not in the $PATH environment variable; or it had installed and was in the $PATH but with incorrect permissions. I logged in and double-checked: chef-solo and chef-client were installed; the PATH variable for the user, sudo and root all included /usr/local/bin; and the permissions were all fine. I managed to solve the problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to ssh into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
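
    If the root cause is the install-time environment rather than the gem itself, baking the manual fix into the instance's user-data may help. A hypothetical sketch (untested; the flags merely skip documentation generation to speed things up):

        #!/bin/sh
        # run the same command that fixed it by hand, as root with a login-style
        # PATH, before Vagrant's provisioner looks for the chef binaries
        sudo -i gem install chef --no-ri --no-rdoc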

    Read the article

  • Create a mailbox in qmail, then forward all incoming messages to Gmail

    - by lorenzo-s
    I needed to let PHP send mail from my webserver to my web app users, so I installed qmail on my Debian server:

        sudo apt-get install qmail

    I also updated the files in /etc/qmail to specify my domain name, and then ran sudo qmailctl reload and sudo qmailctl restart:

        /etc/qmail/defaultdomain   # Contains 'mydomain.com'
        /etc/qmail/defaulthost     # Contains 'mydomain.com'
        /etc/qmail/me              # Contains 'mail.mydomain.com'
        /etc/qmail/rcpthosts       # Contains 'mydomain.com'
        /etc/qmail/locals          # Contains 'mydomain.com'

    Emails are sent without any problem from my PHP script to any email address, using the standard mail() PHP function. Now the problem: if I send mail from PHP using [email protected] as the sender address, I want customers to be able to reply to that address! And possibly, I want all mail sent to that address to be forwarded to my personal Gmail address. At the moment qmail seems to reject all incoming mail because of "invalid mailbox name". Here is a complete SMTP session I established with my server:

        me@MYPC:~$ nc mydomain.com 25
        220 ip-XX-XX-XXX-XXX.xxx.xxx.xxx ESMTP
        HELO [email protected]
        250 ip-XX-XX-XXX-XXX.xxx.xxx.xxx
        MAIL FROM:<[email protected]>
        250 ok
        RCPT TO:<[email protected]>
        250 ok
        DATA
        554 sorry, invalid mailbox name(s). (#5.1.1)
        QUIT

    I'm sure I'm missing something related to mailbox or alias creation; in fact I did nothing to define that mailbox anywhere. I tried to search the net and the numerous qmail man pages, but I found nothing.
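
    In stock qmail, addresses that don't correspond to a system user are handled by the alias user: a ~alias/.qmail-NAME file both makes NAME a deliverable mailbox and says what to do with its mail, and a line of the form &address forwards to another address. A sketch, assuming the usual /var/qmail layout and a hypothetical "info" address (the Gmail address is a placeholder):

        # accept mail for the hypothetical info@ address and forward it on
        echo '&your.account@gmail.com' > /var/qmail/alias/.qmail-info
        chmod 644 /var/qmail/alias/.qmail-info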

    Read the article

  • Looking for an application to record audio and video on a linux "embedded" device

    - by Luke404
    I am working with a linux x86 device with limited CPU resources (as a prototype we just use a Pentium M netbook). We'd like to record video from one V4L2 device (we'll probably end up using just USB Video Class devices, like all modern webcams) and one audio stream from an ALSA source. The thing will have no screen and no keyboard, and obviously no X11 environment. Goals are: do as little work as possible to cope with the limited CPU resources - for example, I'd like to record video in the native MJPEG I get out of the UVC devices; encoding audio to MPEG-1 Layer II (aka mp2) is fine, since it saves a lot of space compared to raw PCM samples while using little CPU power. I don't mind losing some video frames here and there (UVC devices do that) as long as I can keep the audio and video streams synchronized. It must not require user input to start (a python script takes care of initialization, startup, shutdown, etc.), and I should be able to open the resulting files for postprocessing without too much effort (i.e., if mplayer or vlc can play it, it's fine). So far the only app I found that can be started from the command line and record V4L2 video plus ALSA audio is mencoder, but I'm having some difficulties with it. It should be able to do this, but I cannot record audio and video together - just one of the two. And if I use two different processes to record to two different files, I have no means to get them in sync (audio is more or less always correct, but the video framerate varies over time and it seems to lack timestamps to play it back at the correct speed). Long story short: how do you record an unconverted MJPEG stream (from a UVC device) and an audio stream (from an ALSA device, possibly encoded to any standard format) using a command line tool, to a single file (MPEG or any other container), keeping audio and video in sync?
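
    ffmpeg can do this in one process, which also solves the sync problem. A sketch - the device names, formats and bitrate are assumptions to adjust for the actual hardware, and very old ffmpeg builds may spell some options differently:

        # copy the camera's native MJPEG (no video transcoding, so little CPU),
        # encode audio to MPEG-1 Layer II, mux both into one file
        ffmpeg -f v4l2 -input_format mjpeg -i /dev/video0 \
               -f alsa -i hw:0 \
               -c:v copy -c:a mp2 -b:a 192k capture.mkv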

    Read the article

  • How to email photos from the Ubuntu F-Spot application via Gmail?

    - by Norman Ramsey
    My father runs Ubuntu and wants to be able to use the GNOME photo manager, F-Spot, to email photos. However, he must use Gmail as his client because (a) it's the only client he knows how to use and (b) his ISP refuses to reveal his SMTP password. I've got as far as setting up Firefox to use Gmail to handle mailto: links, and I've also configured Firefox as the system default mailer using gnome-default-applications-properties. F-Spot presents a mailto: URL with an attach=file:///tmp/mumble.jpg header. So here's the problem: the attachment never shows up. I can't tell if Firefox is dropping the attachment header, if Gmail doesn't support the header, or what. I've learned that: there's no official header in the mailto: URL RFC for adding an attachment; I can't find documentation on how Firefox handles mailto: URLs that would explain how to tell it I want an attachment; and I can't find any documentation for Gmail's URL API that would let me tell Gmail directly to start composing a message with a given file as an attachment. I'm perfectly capable of writing a shell script to interpose around F-Spot and massage the URL it presents into something that will coax Firefox into doing the right thing. But I can't figure out how to persuade Firefox to start composing a Gmail message with a local file attached. Any help would be greatly appreciated.

    Read the article

  • How to schedule automatic (daily) snapshots of AWS EC2 Windows Instance?

    - by Stanley
    I have some Windows servers hosted on Amazon EC2. Some run Windows Server 2003 and others run Windows Server 2008. These are EBS-backed instances, and most of them also have additional EBS volumes attached. We want to schedule a daily snapshot of the Windows machines (and also the attached EBS volumes) to S3, so that we have daily backups available. One would think that this is a very common requirement and would be made available via the AWS Management Console, but alas, it is not. What approaches are available? How do I schedule daily snapshots on our Windows servers? There are several scripting examples available online for Linux, but not so much for Windows. I have had a look at http://sehmer.blogspot.com/2011/04/amazon-ec2-daily-snapshot-script-for.html as well as https://github.com/ronmichael/aws-snapshot-scheduler. Has anyone used one of these approaches, and does it work? I have also considered a service like Skeddly, which seems inexpensive at first glance, but when you look at using it for several servers the price soon escalates to the point where it seems better to create your own solution, which you can then apply to new servers in the future (with Skeddly we would pay per server). How do we schedule daily snapshots of our Windows instances?
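
    Since EBS snapshots are taken through the API, the scheduler does not have to run on the instances themselves. A sketch using the AWS CLI (an assumption - any machine with credentials allowing ec2:CreateSnapshot will do, and the volume IDs are placeholders):

        #!/bin/sh
        # snapshot each listed EBS volume; run daily from cron or Task Scheduler
        for vol in vol-11111111 vol-22222222; do
            aws ec2 create-snapshot --volume-id "$vol" \
                --description "daily backup $(date +%F)"
        done

    Note that a snapshot of a running Windows volume is crash-consistent rather than application-consistent, so databases may need to be quiesced first.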

    Read the article

  • Using a modem for sending voice recordings

    - by ircmaxell
    I've got an interesting one for you. I've been going over my server monitoring and notification systems (Nagios based), and realized that if our internet connection goes down, there's no way for them to notify me. I already have a modem listening (via CentOS 5) on a spare POTS line so that I can dial in if our internet goes down. I was wondering if I could come up with a script (shell, Python, etc.) that can dial out and play a recorded message (a wave file, I'm guessing) when the call is picked up. I know Windows supports voice calls over a voice modem; I was wondering if a solution exists for Linux. I know Asterisk can probably do it, but isn't that overkill (a full-blown VOIP system just for a notification mechanism that will hopefully never be used)? And wouldn't it interfere with the modem's primary function as a backup network interface (PPP spawned via mgetty)? I've done some searching and haven't really come up with much. I know how to dial out from the command line, but only as a modem (not as voice). Worst case, I could set it up to dial out as a modem, and just understand that a call with modem sounds from that number is the notification... Any insight would be appreciated.

    Read the article

  • Migrating from Exchange 2003 to 2010: UID changes from 32 characters to 64 characters

    - by Seth
    We have built a custom CRM tool that integrates with Exchange 2010 using Exchange Web Services (EWS). The issue we are encountering revolves around editing appointments through the CRM tool that were created in Exchange 2003. We migrated the sales staff from Exchange 2003 to 2010 so that we could use EWS. EWS works great except for appointments created prior to the migration: those cannot be modified using EWS. The reason is that the ExchangeItemUID for the appointment changed from 32 characters to 64 characters, and EWS does not recognize ExchangeItemUIDs that are 32 characters long. We are looking for a solution that will allow us to modify these appointments. We are open to running a script that updates all appointment events for the sales people so that Exchange 2003 appointments are converted to the 2010 format, and we are also open to alternate IDs as opposed to using the UID. I have seen some references to using CleanGlobalObjectID, but I don't see that property in EWS. Has anyone encountered this problem before? Any help you could give would be greatly appreciated!

    Read the article

  • How to disable "safely remove hardware"

    - by Matt
    I have some Windows 7 virtual machines in Xen that have devices showing up in "Safely Remove Hardware". I don't want users to ever be able to remove or eject any hardware at all. I'm told VMware has a hotplug option; Xen doesn't seem to provide this for PCI passthrough devices, so I'm looking for a reliable way to prevent users from ejecting devices. This issue is not necessarily related just to virtual machines, but seems to be a common problem with devices that are wrongly reported as removable. I'm ideally looking for a way to prevent all such devices from appearing, or just to prevent the "Safely Remove Hardware" option from ever coming up. I've tried setting device capabilities for specific devices on boot with a script, but for some reason this doesn't always work reliably. Is there a way to prevent this icon from appearing in the notification area completely, either by registry key or group policy? I should point out that setting the following policy to "Administrators" did not seem to work:

        Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options\
        Devices: Allowed to format and eject removable media

    Read the article

  • Debian - starting UFW (Uncomplicated Firewall) before network interfaces are operational

    - by Tomasz Zielinski
    I want to install UFW on Debian Lenny. Everything looks straightforward except that I don't know where to plug the UFW startup script in so that it configures iptables before hax0rs can break in. I've reviewed the runlevel directories, and in /etc/rc0.d, /etc/rc6.d and /etc/rcS.d there are items like these:

        S35networking -> ../init.d/networking
        S36ifupdown -> ../init.d/ifupdown

    Runlevels 0 and 6 are for shutdown and reboot, so I guess nothing should be changed there, but runlevel S advertises itself (in its README) as something for me:

        The scripts in this directory whose names begin with an 'S' are
        executed once when booting the system, even when booting directly
        into single user mode.
        The following sequence points are defined at this time:
        * After the S40 scripts have executed, all local file systems are
          mounted and networking is available. All device drivers have
          been initialized.

    (What bothers me is that both rc0/6.d and rcS.d point to the same networking and ifupdown scripts, but after looking at the sources I believe those scripts are smart enough to figure out where to start and where to stop networking.) Now, I think that I should plug my /lib/ufw/ufw-init into /etc/rcS.d, with a priority higher than those of ifupdown and networking, i.e. a sequence number <= 34 in /etc/rcS.d. Am I right in this "analysis"?
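
    Since iptables rules can be loaded before any interface is up, ordering the script ahead of S35networking is enough. A sketch (the sequence number and script path are assumptions - the Debian ufw package may already ship its own init integration):

        # run the UFW init script early in rcS, before S35networking
        ln -s ../init.d/ufw /etc/rcS.d/S34ufw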

    Read the article

  • Batch file to create many files with special characters

    - by MollyO
    Essential info: I have a file "DB_OUTPUT.TXT" with 304 lines that I need to turn into 304 files (one per line). Each line contains many special characters and may be up to tens of thousands of characters long. For these reasons I'm having difficulty using a cmd.exe batch file (which limits the length of input lines) and the echo command (which would interpret each special character, short of me escaping them all). I also have a file "DB_OUTPUT_FILENAMES.TXT" containing a distinct filename for each line-soon-to-be-file from "db_output.txt". So line 1 of DB_OUTPUT.TXT needs to become the body of a new file whose name is line 1 of DB_OUTPUT_FILENAMES.TXT. Extra info: as you may have guessed, DB_OUTPUT.TXT is output from a database; it contains 304 records with 6 or 7 fixed-width columns, the last column being a SQL query. Each of these lines (db records) will be used as a script to create new database objects, which is why the special characters need to be preserved. Question: is there a way to do this in a batch-like fashion? I'd be happy with either a Windows solution or a Linux one.
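
    On the Linux side this is a few lines of shell. A sketch, assuming the two files have matching line counts and the filenames contain no stray whitespace - read -r never interprets the data, so the special characters survive intact:

        #!/bin/sh
        # pair line N of DB_OUTPUT.TXT with line N of DB_OUTPUT_FILENAMES.TXT
        i=1
        while IFS= read -r name; do
            sed -n "${i}p" DB_OUTPUT.TXT > "$name"
            i=$((i + 1))
        done < DB_OUTPUT_FILENAMES.TXT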

    Read the article

  • Persistent routes for DD-WRT PPTP VPN client

    - by Tim Kemp
    My home network in the USA is behind a Buffalo router (G300NH) running their version of DD-WRT. I use the built-in PPTP VPN client to connect to a VPN provider in the UK. I route certain traffic over the VPN (so it has a UK source address, for various entirely legal reasons), which I achieved by following the instructions in the DD-WRT docs and my VPN provider's own instructions. I placed two commands like this in the firewall script:

        route add -net xxx.xxx.0.0 netmask 255.255.0.0 dev ppp0
        route add -net yyy.yyy.0.0 netmask 255.255.0.0 dev ppp0

    I didn't add any of the iptables rules, since my setup doesn't seem to need them. It works like a charm: traffic to the xxx subnets goes over the VPN, everything else goes out over my ISP's own pipes. The problem comes when the VPN drops, which it does occasionally. DD-WRT does a fine job of reconnecting it automatically, but the routes are trashed every time that happens. How do I automate the process of re-establishing my routes? I thought about static routes, but the IP address of the VPN connection is dynamically assigned (which is why I'm using dev ppp0). Many thanks, Tim
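
    Rather than the firewall script (which runs once at boot), the routes could be re-added every time the PPP link comes up. A sketch, assuming JFFS is enabled and that this DD-WRT build executes /jffs/etc/config/*.ipup scripts on the ip-up event (worth verifying for the specific firmware):

        #!/bin/sh
        # /jffs/etc/config/vpnroutes.ipup -- re-create the VPN routes on reconnect
        route add -net xxx.xxx.0.0 netmask 255.255.0.0 dev ppp0
        route add -net yyy.yyy.0.0 netmask 255.255.0.0 dev ppp0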

    Read the article
