Search Results

Search found 23404 results on 937 pages for 'script compression'.

Page 664/937 | < Previous Page | 660 661 662 663 664 665 666 667 668 669 670 671  | Next Page >

  • Remove Audio stream from XVID files

    - by Kyle Brandt
    I have a bunch of Xvid files that each have an audio stream that I do not want. How can I strip the audio track I don't want using the Linux command line? I don't need the whole script (loop), just the command I would use to process each AVI file individually (unless the command itself has batch processing built in). I don't believe the file is in an mkv container, as mkvinfo doesn't find anything. Here is part of mplayer's output (thanks ~quack):

        [aviheader] Video stream found, -vid 0
        ID_AUDIO_ID=1
        [aviheader] Audio stream found, -aid 1
        ID_AUDIO_ID=2
        [aviheader] Audio stream found, -aid 2
        VIDEO: [XVID] 512x384 12bpp 25.000 fps 1013.4 kbps (123.7 kbyte/s)
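    A minimal sketch of one way to do this, assuming ffmpeg is installed (not something stated in the post); -an drops all audio streams and -c:v copy keeps the Xvid video without re-encoding. Filenames are placeholders:

        # strip all audio, copy the video stream as-is (no re-encode)
        ffmpeg -i input.avi -an -c:v copy output.avi

        # batch version over every .avi in the current directory
        for f in *.avi; do
            ffmpeg -i "$f" -an -c:v copy "stripped-$f"
        done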

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run a DBCC CHECKDB on the database to check everything except those couple of tables due to time constraints. I posted a request on the Connect site about this some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't much like this as a workaround, as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • "Can't create table" when having to many partitions

    - by Chris
    I am currently having a problem I don't understand. Everywhere I look it says MySQL (5.5) / InnoDB doesn't have a table limit. I wanted to test InnoDB compression and was about to create an empty copy of an existing table when I ran into the following problem.

    This one works:

        CREATE TABLE `hsc` ( LOTS OF STUFF ) ENGINE=InnoDB CHARSET=utf8
        PARTITION BY RANGE (pid)
        SUBPARTITION BY HASH (cons) SUBPARTITIONS 2
        (PARTITION hsc_p0 VALUES LESS THAN (10000) ,
         PARTITION hsc_p1 VALUES LESS THAN (20000) ,
         PARTITION hsc_p2 VALUES LESS THAN (30000) ,
         PARTITION hsc_p3 VALUES LESS THAN (40000) ,
         PARTITION hsc_p4 VALUES LESS THAN (50000) ,
         PARTITION hsc_p40 VALUES LESS THAN (4000000) );

    This one doesn't:

        CREATE TABLE `hsc` ( LOTS OF STUFF ) ENGINE=InnoDB CHARSET=utf8
        PARTITION BY RANGE (pid)
        SUBPARTITION BY HASH (cons) SUBPARTITIONS 2
        (PARTITION hsc_p0 VALUES LESS THAN (10000) ,
         PARTITION hsc_p1 VALUES LESS THAN (20000) ,
         PARTITION hsc_p2 VALUES LESS THAN (30000) ,
         PARTITION hsc_p3 VALUES LESS THAN (40000) ,
         PARTITION hsc_p4 VALUES LESS THAN (50000) ,
         PARTITION hsc_p5 VALUES LESS THAN (75000) ,
         PARTITION hsc_p6 VALUES LESS THAN (100000) ,
         PARTITION hsc_p7 VALUES LESS THAN (125000) ,
         PARTITION hsc_p8 VALUES LESS THAN (150000) ,
         PARTITION hsc_p9 VALUES LESS THAN (175000) ,
         PARTITION hsc_p40 VALUES LESS THAN (4000000) );

        ERROR 1005 (HY000): Can't create table 'hsc' (errno: 1)

    It's reproducible by removing a number of partitions and adding them again. It does not have anything to do with the name of the table, as I tried various names. There is also enough empty space on the HDD:

        /dev/simfs    230G   26G  192G  12%  /var/lib/mysql.mnt

    There should be no limit on the partitions (http://dev.mysql.com/doc/refman/5.5/en/partitioning-limitations.html): "Maximum number of partitions. The maximum possible number of partitions for a given table (that does not use the NDB storage engine) is 1024. This number includes subpartitions."

    I have increased both open_files settings:

        show variables where variable_name LIKE '%open_files%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | innodb_open_files | 512   |
        | open_files_limit  | 1536  |
        +-------------------+-------+

    No change. Any clues where I should start looking?

    UPDATE: the whole thing is running in an OpenVZ environment. I saw in user_beancounters that numflock was a problem, so I increased it, but the problem still persists.
    Maybe this helps:

        ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 515011
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 515011
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        cat /proc/user_beancounters
        Version: 2.5
        uid  resource       held     maxheld   barrier              limit                failcnt
        200: kmemsize       9309653  13357056  14372700             14790164             0
             lockedpages    0        1008      2048                 2048                 0
             privvmpages    675424   686528    1048576              1572864              0
             shmpages       33       673       21504                21504                0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             numproc        49       90        240                  240                  0
             physpages      243761   246945    0                    9223372036854775807  0
             vmguarpages    0        0         1048576              1048576              0
             oomguarpages   81672    83305     1048576              1048576              0
             numtcpsock     6        8         360                  360                  0
             numflock       175      188       512                  512                  8
             numpty         1        9         16                   16                   0
             numsiginfo     0        48        256                  256                  0
             tcpsndbuf      104640   263912    1720320              2703360              0
             tcprcvbuf      98304    131072    1720320              2703360              0
             othersockbuf   32368    89304     1126080              2097152              0
             dgramrcvbuf    0        2312      262144               262144               0
             numothersock   19       28        360                  360                  0
             dcachesize     2285052  3624426   3409920              3624960              0
             numfile        616      870       9312                 9312                 0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             numiptent      24       24        128                  128                  0
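    Not from the original post, but a sketch of how one might dig further, assuming shell access to the container and to the OpenVZ host node (the container ID 200 comes from the beancounters output above; the new limits are illustrative values only):

        # inside the container: translate errno 1 as reported by MySQL
        perror 1

        # on the OpenVZ host node: raise the file-lock limit for container 200,
        # since numflock is the only resource above with a non-zero failcnt
        vzctl set 200 --numflock 2048:2048 --save

        # then restart MySQL inside the container and retry the CREATE TABLE
        service mysql restart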

    Read the article

  • Unable to print login-required images in IE

    - by Tim Fountain
    I have some images in a section of a site that require the user to be logged in in order to view. These images are served by a PHP script, which checks the user's login state and, if valid, serves the binary data with the appropriate headers. This all works fine. The issue comes when a user tries to print one of these images. In Internet Explorer, when they go to print preview they get the broken-image box with a red cross in the corner instead of the actual file. This is also what gets printed. All other browsers can print the images without issue. I have some images elsewhere on the site that are also served via PHP but don't require a login; these print fine. The PHP-powered HTML pages on the site that require a login also print fine in IE. It's just login-required images. The user hitting print preview does not seem to result in an additional HTTP request to the server for the file. However, I do see an additional HTTP request a few seconds later that comes from the same IP (it may or may not be related); this request includes no Host header, no REQUEST_URI and no user agent. The 'please login' page sends an appropriate 403 header. I've also added a far-in-future Expires header to the image response itself to ensure that browsers can serve/print the files from their own cache, but this hasn't made any difference. Why can't IE print the images, and what else can I do to investigate or fix the problem?
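    One diagnostic worth trying (not from the original post): older IE versions refuse to print or save resources they are not allowed to cache, so it can help to compare the exact response headers of a login-required image with those of one that prints fine. A sketch using curl, with a placeholder cookie and URL:

        # fetch only the headers, sending the PHP session cookie so the login check passes
        curl -s -D - -o /dev/null \
             -H "Cookie: PHPSESSID=your-session-id-here" \
             "https://example.com/protected/image.php?id=123"

        # look for Cache-Control: no-cache / no-store or Pragma: no-cache in the output;
        # IE treats those as "do not keep a local copy", which also breaks print preview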

    Read the article

  • Can I nohup/screen an already-started process?

    - by ojrac
    I'm doing some test-runs of long-running data migration scripts, over SSH. Let's say I start running a script around 4 PM; now, 6 PM rolls around, and I'm cursing myself for not doing this all in screen. Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to/nohup a process I've already started, then why? Something to do with how parent/child processes interact? (I won't accept a "no" answer that doesn't at least address the question of 'why' -- sorry ;) )
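    A sketch of two approaches commonly suggested for this situation (assumptions about the setup, not from the original question): the shell's own disown built-in, which stops SIGHUP from being delivered when the SSH session ends, and the reptyr tool, which can pull an existing process into a new terminal such as one inside screen:

        # option 1: from the bash session that started the script
        # press Ctrl+Z to suspend it, then:
        bg            # resume it in the background
        disown -h %1  # drop it from the job table so it won't receive SIGHUP on logout

        # option 2: grab the running process from a fresh screen session
        screen
        reptyr <pid-of-the-script>   # requires reptyr to be installed and ptrace permitted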

    Read the article

  • How to install and configure an MTA on Linux [closed]

    - by Umair Mustafa
    I need to know which MTA is better and simpler to set up and configure on Linux, because I need to run a script from cron that emails me the output of a command each time it runs. The situation is this: every day I have to manually check the disk space on more than 30 servers, which is a headache, and I have to document it. I simply want to run df -h and have the output of that command sent to my email. So, which MTA is better, sendmail or postfix, and what are the steps to install and configure it? And once it's configured, how do I hook up df -h so that its output starts arriving in my inbox? Thanks in advance.
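    Not part of the original question, but a minimal sketch of one common way to do this on Debian/Ubuntu: install Postfix plus a command-line mail client, then let cron pipe the df output into mail. The schedule and address are placeholders:

        # install Postfix and the mailx client (package names as on Debian/Ubuntu)
        sudo apt-get install postfix mailutils

        # quick test that outbound mail works
        echo "test body" | mail -s "test from $(hostname)" you@example.com

        # crontab entry (add with `crontab -e`): mail the disk-space report every morning at 07:00
        0 7 * * * df -h | mail -s "disk space report: $(hostname)" you@example.com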

    Read the article

  • X Session from Mac

    - by tekknolagi
    How can I log into an X server from Mac OS X? I know that ssh -X username@host will log me in and I will have the capability to run X applications. On Cygwin/X you can log in and have a whole X session from your computer... and it will look something like this: How can I replicate this? Using this batch script:

        @echo off
        SET DISPLAY=127.0.0.1:0.0
        SET REMOTE_HOST=%1
        IF "%REMOTE_HOST%" == "" SET REMOTE_HOST=10.0.0.1
        SET CYGWIN_ROOT=\cygwin
        SET RUN=%CYGWIN_ROOT%\bin\run -p /usr/bin
        SET PATH=.;%CYGWIN_ROOT%\bin;%PATH%
        SET XAPPLRESDIR=
        SET XCMSDB=
        SET XKEYSYMDB=
        SET XNLSPATH=
        if not exist %CYGWIN_ROOT%\tmp\.X11-unix\X0 goto CLEANUP-FINISH
        attrib -s %CYGWIN_ROOT%\tmp\.X11-unix\X0
        del %CYGWIN_ROOT%\tmp\.X11-unix\X0
        :CLEANUP-FINISH
        if exist %CYGWIN_ROOT%\tmp\.X11-unix rmdir %CYGWIN_ROOT%\tmp\.X11-unix
        if "%OS%" == "Windows_NT" goto OS_NT
        echo startxdmcp.bat - Starting on Windows 95/98/Me
        goto STARTUP
        :OS_NT
        REM Windows NT/2000/XP
        echo startxdmcp.bat - Starting on Windows NT/2000/XP
        :STARTUP
        %RUN% XWin -query tekknolagi.dyndns.org -clipboard -lesspointer -scrollbars -screen 0 1050x1655@2 -screen 1 1680x985@1
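    The key part of that batch file is the -query flag, which asks the remote host for an XDMCP login session. A rough Mac equivalent (an assumption, not something from the original question) would be to point the XQuartz X server at the same host with the same standard flag; the binary path differs between XQuartz and the older Apple X11:

        # with XQuartz installed, start a second X server that requests an
        # XDMCP session from the remote machine (display :1)
        /opt/X11/bin/X :1 -query tekknolagi.dyndns.org

        # the remote side must run an XDMCP-enabled display manager
        # (e.g. enable XDMCP in gdm/kdm/xdm) for the login greeter to appear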

    Read the article

  • Plesk hosting on MediaTemple DV

    - by David
    Hi there, We have a MediaTemple dedicated virtual server running Plesk. The problem we're having is that changing the permissions of files on the server to be writable by the server owner (apache) conflicts with the ability to upload and overwrite files via the FTP user. Here's an example: I upload a file as the user "serverftp" and it owns the new file in the httpdocs folder. I then change the ownership of an image upload folder to the apache user so that I can upload images via a PHP script. Uploading to or changing that folder with the serverftp user is then locked out. Speaking to tech support didn't get very far because there are some strange group permissions going on, and it would involve me adding every single domain FTP user to the pcantl group or something similar. I'm wondering how I can easily change things so that I don't have this problem anymore.
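    A common workaround (an assumption about this setup, not something stated in the post) is to give the upload directory a shared group and the setgid bit, so files stay writable by both the FTP user and Apache, instead of flipping ownership back and forth. Paths and group name are placeholders (on Plesk the web group may be psacln or psaserv):

        # let the FTP user keep ownership, give Apache's group write access
        chown -R serverftp:apache /var/www/vhosts/example.com/httpdocs/uploads
        chmod -R 2775 /var/www/vhosts/example.com/httpdocs/uploads
        # the leading 2 (setgid) makes new files inherit the group,
        # so PHP uploads and FTP uploads remain mutually writable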

    Read the article

  • Does an SMTP request contain host header information (or just the IP of the targeted SMTP server)?

    - by Olaf
    We are using an external commercial SMTP server for our newsletters (sending them through .NET components), and they offer two SMTP hostnames - smtp.critsend.com and fast.critsend.com - with the second one reserved for sending singular emails and the first one for bulk. Using nslookup shows that both resolve to the same 4 IP addresses (fast.critsend.com being an alias). Question: (how) is it possible for the SMTP relay to distinguish between the different names? Is there something in the headers that can be compared to Host headers in the HTTP protocol (I didn't find any intelligible information for non-sysadmins)? The reason I'm asking is that we would like to use one of the IPs in our newsletter script (which works) rather than a name (in order to save DNS requests), and we are wondering about potential problems.
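    For what it's worth, plain SMTP has no equivalent of an HTTP Host header: the hostname is only used to resolve an IP address (and, where TLS is involved, to verify the server certificate), while the client identifies itself with HELO/EHLO. A quick way to look at what the two names actually point at, as a sketch (commands assumed available, not from the original question):

        # confirm both names resolve to the same address set
        dig +short smtp.critsend.com
        dig +short fast.critsend.com

        # watch the SMTP dialogue by hand; the server cannot tell which name you used
        openssl s_client -starttls smtp -connect smtp.critsend.com:25 -crlf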

    Read the article

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my fluxbox startup file as per the instructions, however nothing starts up except fluxbox. It doesn't matter what app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked" and such, nothing works. It makes no difference if the file is executable or not.
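    One thing worth checking (an assumption, since the post doesn't say how the session is launched): ~/.fluxbox/startup is only executed when the session is started through the startfluxbox wrapper; launching the fluxbox binary directly skips it. A sketch for an .xinitrc or display-manager session command:

        # ~/.xinitrc (or the session's Exec line), assuming startfluxbox is installed
        exec startfluxbox

        # startfluxbox sources ~/.fluxbox/startup, which in turn execs fluxbox at the end;
        # with `exec fluxbox` here instead, tint2 and tilda would never be launched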

    Read the article

  • HALEVT troubleshooting: VFAT usb storage device gets mounted with root:root user:group

    - by Nova deViator
    Hi, I've been banging my head against this problem for a number of days. Using Halevt for automounting, everything mostly works, but the one issue is that Halevt mounts external USB storage devices as root, so as a regular user I cannot write to files on them. Halevt is run as the halevt user on boot through an /etc/init.d script. This is Ubuntu Lucid with the Awesome WM; no GDM. Running halevt as my own user doesn't seem to work (halevt runs but doesn't respond on insert). I know HAL is deprecated and removed and I should probably write my own udev rules, but until then it seems there must be a simple hack that enables mounting VFAT/NTFS devices with a specific uid/gid. This question/answer helps a lot, but not specifically with the above.
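    A sketch of the kind of "simple hack" usually suggested here (device details are assumptions, not from the original post): VFAT has no POSIX ownership, so uid/gid/umask mount options decide who may write once the volume is mounted. A static fstab entry keyed on the filesystem UUID sidesteps Halevt's root mount entirely:

        # find the filesystem UUID of the stick
        sudo blkid /dev/sdb1

        # /etc/fstab entry (UUID is a placeholder); uid/gid 1000 is the desktop user
        UUID=1234-ABCD  /media/usbstick  vfat  user,noauto,uid=1000,gid=1000,umask=002  0  0

        # then a plain, non-root mount works:
        mount /media/usbstick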

    Read the article

  • Export and import a PostgreSQL database with a different name?

    - by J. Pablo Fernández
    Is there a way to export a PostgreSQL database and later import it under another name? I'm using PostgreSQL with Rails and I often export the data from production, where the database is called blah_production, and import it on development or staging under the names blah_development and blah_staging. On MySQL this is trivial, as the export doesn't mention the database anywhere (except in a comment, maybe), but on PostgreSQL it seems to be impossible. Is it? I've seen some people out there using sed scripts to modify the dump. I'd like to avoid that solution, but if there is no alternative I'll take it. Has anybody written a script to alter the dump's database name while ensuring no data is ever altered?
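    For what it's worth, a plain pg_dump (without the -C/--create flag) doesn't embed the database name at all, so the usual approach is simply to restore into a database created under the new name. A minimal sketch with placeholder connection details:

        # on production: dump only the contents, not a CREATE DATABASE statement
        pg_dump -U postgres blah_production > blah.sql

        # on development/staging: make the target database, then load the dump into it
        createdb -U postgres -O rails_user blah_development
        psql -U postgres -d blah_development -f blah.sql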

    Read the article

  • ArchBeat Link-o-Rama for December 14, 2012

    - by Bob Rhubart
    JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes | John-Brown Evans
    John-Brown Evans' post continues the series of JMS articles that demonstrate how to use JMS queues in a SOA context. "This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite," John explains. And if you missed the first 5 steps, don't worry – the post includes links.

    Cloud Deployment Models | B. R. Clouse
    Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."

    Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar
    Would you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be succesful with ADF."

    Using Oracle Enterprise Manager Cloud Control 12c with Filer Snapshotting | Porus Homi Havewala
    This concise technical article includes a script for database backup using snapshots and cataloging in RMAN.

    Thought for the Day
    "A program which perfectly meets a lousy specification is a lousy program." — Cem Kaner
    Source: SoftwareQuotes.com

    Read the article

  • How can I update Preview.app from the command line without losing focus on OS X?

    - by snies
    Hello, I want to update Preview.app in the background from the command line without losing focus from my current window. I know that I can use the following to open/update the view of a file, but then I lose focus to Preview.app: open -a Preview foo.pdf I guess there might be some clever AppleScript commands to do this, but so far I haven't found the right one. Alternatively I would be interested in transferring the focus back to my current app directly after the update. I need this in order to update Preview.app's view of a PDF through a vi autocmd after I update the PDF according to changes in a TeX file I am editing. Here is an example of what I want to achieve, but using Ubuntu and evince.
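    One small thing to try (an assumption, not from the original question): the macOS open command has a -g flag that opens or refreshes the document without bringing the target application to the foreground, which may be all the autocmd needs:

        # refresh foo.pdf in Preview without stealing focus
        open -g -a Preview foo.pdf

        # example vim autocmd (in ~/.vimrc) that refreshes the PDF after writing the .tex file
        # autocmd BufWritePost *.tex silent !open -g -a Preview %:r.pdf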

    Read the article

  • What ports tend to be unfiltered by boneheaded firewalls?

    - by Reid
    Hi all, I like to be able to ssh into my server (shocking, I know). The problem comes when I'm traveling, where I face a variety of firewalls in hotels and other institutions, having a variety of configurations, sometimes quite boneheaded. I'd like to set up an sshd listening on a port that has a high probability of getting through this mess. Any suggestions? The sshd currently listens on a nonstandard (but < 1024) port to avoid script kiddies knocking on the door. This port is frequently blocked, as is the other nonstandard port where my IMAP server lives. I have services running on ports 25 and 80 but anything else is fair game. I was thinking 443 perhaps. Much appreciated! Reid
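    Port 443 is indeed the usual pick, since outbound HTTPS is almost never blocked. A sketch of running sshd on 443 in addition to its current port (assuming nothing else, such as an HTTPS vhost, is already bound to 443; port 2222 below is a placeholder for the existing nonstandard port):

        # /etc/ssh/sshd_config: listen on both the existing port and 443
        Port 2222
        Port 443

        # reload sshd and confirm the listener
        sudo service ssh reload
        sudo netstat -tlnp | grep sshd

        # client side (~/.ssh/config) for when you're behind a strict firewall
        Host home-via-443
            HostName myserver.example.com
            Port 443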

    Read the article

  • How can Nagios handle non-threshold based plugins?

    - by FliesLikeABrick
    I am writing a Nagios plugin to monitor trends in a certain storage resource's utilization (e.g. gradual increases are fine, but an instantaneous/sudden increase or decrease in resource usage may indicate a problem). For what it's worth, it reviews the last N entries in an RRD file generated by a custom Cacti data source/template. What is the "right" way to handle the Nagios notification config/implementation for this? The problem is that the plugin would exit as warning/critical for one polling period, but in the next it would be fine (or 3 polling periods later, if I look at 3 polling periods' worth of data). I guess the question is: should I just write it in such a way that it will alert for X polling periods, or should I find a way to write it such that manual intervention is required for it to clear (such as logging into the monitoring server or hitting a URL to run a script that submits a passive result)? Your input is appreciated, and if you have any tips for how to implement the latter I'm open to them (I can think of a few ways to possibly implement it).
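    If the "must be manually cleared" behaviour is the goal, one common sketch (an assumption, not from the original question; check_trend is a hypothetical name for the real trend check) is to latch the bad state in a small file that the plugin keeps reporting until an operator removes it:

        #!/bin/bash
        # latching wrapper around the real trend check
        LATCH=/var/tmp/storage_trend.latched

        if [ -f "$LATCH" ]; then
            echo "CRITICAL: trend alarm latched since $(cat "$LATCH") - remove $LATCH to clear"
            exit 2
        fi

        if ! /usr/local/lib/nagios/check_trend; then
            date > "$LATCH"          # remember when the spike was seen
            echo "CRITICAL: sudden change in storage utilisation detected"
            exit 2
        fi

        echo "OK: storage utilisation trend within expected bounds"
        exit 0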

    Read the article

  • pptpd not working externally on Ubuntu Server 11.10

    - by Brendan
    I am trying to set up a pptpd VPN on our newly installed Ubuntu 11.10 64-bit server, but am not having success getting a client to connect via an iPhone to the VPN. Note that no clients have been able to connect to this VPN from outside of the network. The system is up to date with patches. Here is the output of /var/log/syslog. Please note that 222.153.x.y is my remote IP address.

        Mar 30 22:07:47 server pptpd[9546]: CTRL: Client 222.153.x.y control connection started
        Mar 30 22:07:47 server pptpd[9546]: CTRL: Starting call (launching pppd, opening GRE)
        Mar 30 22:07:47 server pppd[9555]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Mar 30 22:07:47 server pppd[9555]: pppd 2.4.5 started by root, uid 0
        Mar 30 22:07:47 server pppd[9555]: Using interface ppp0
        Mar 30 22:07:47 server pppd[9555]: Connect: ppp0 <--> /dev/pts/3
        Mar 30 22:07:47 server pptpd[9546]: GRE: Bad checksum from pppd.
        Mar 30 22:08:17 server pppd[9555]: LCP: timeout sending Config-Requests
        Mar 30 22:08:17 server pppd[9555]: Connection terminated.
        Mar 30 22:08:17 server pppd[9555]: Modem hangup
        Mar 30 22:08:17 server pppd[9555]: Exit.
        Mar 30 22:08:17 server pptpd[9546]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Mar 30 22:08:17 server pptpd[9546]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Mar 30 22:08:17 server pptpd[9546]: CTRL: Reaping child PPP[9555]
        Mar 30 22:08:17 server pptpd[9546]: CTRL: Client 222.153.x.y control connection finished

    As you can see, the problem seems to be the connection timing out after 30 seconds ("LCP: timeout sending Config-Requests"). Over WiFi, however (inside the local network), there are no issues:

        Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Client 192.168.0.100 control connection started
        Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Starting call (launching pppd, opening GRE)
        Mar 30 22:12:33 unreal-server pppd[12407]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Mar 30 22:12:33 unreal-server pppd[12407]: pppd 2.4.5 started by root, uid 0
        Mar 30 22:12:33 unreal-server pppd[12407]: Using interface ppp0
        Mar 30 22:12:33 unreal-server pppd[12407]: Connect: ppp0 <--> /dev/pts/3
        Mar 30 22:12:33 unreal-server pptpd[12406]: GRE: Bad checksum from pppd.
        Mar 30 22:12:36 unreal-server pppd[12407]: peer from calling number 192.168.0.100 authorized
        Mar 30 22:12:36 unreal-server pppd[12407]: MPPE 128-bit stateless compression enabled
        Mar 30 22:12:36 unreal-server pppd[12407]: Cannot determine ethernet address for proxy ARP
        Mar 30 22:12:36 unreal-server pppd[12407]: local IP address 192.168.0.10
        Mar 30 22:12:36 unreal-server pppd[12407]: remote IP address 192.168.1.1

    I have set up an iptables config for the server; to check this isn't the problem I allowed all traffic temporarily, but this does NOT change the symptoms in the first example. Here is the output from /etc/iptables.rules.save:

        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT

    Even with these rules applied, the output in /var/log/syslog is LINE FOR LINE what I saw in the first block above. Please note that before running this Ubuntu server, an old SME Server box was running in its place, with a pptpd server just like the one we are using, and we experienced no issues.
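    The pattern in the first log (the TCP 1723 control connection works, but LCP over the GRE tunnel never gets a reply) usually points at GRE (IP protocol 47) being dropped somewhere between the iPhone and the server, rather than at pptpd itself. A few checks one might run, as a sketch (router/NAT specifics are assumptions, not from the post):

        # on the server: watch for GRE packets arriving from the remote client
        sudo tcpdump -ni eth0 proto gre

        # if iptables is in use, make sure both the control port and GRE are allowed
        sudo iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        sudo iptables -A INPUT -p 47 -j ACCEPT

        # if the server sits behind NAT, the router must forward TCP 1723 *and*
        # pass protocol 47 (often called "PPTP pass-through") to 192.168.0.10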

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a git repository that we are keeping all of our cronjobs in. There are about 20 crons and they are not really related except for the fact that they are all small Python scripts and essential for some activity. We are using a fabric.py file to deploy and a requirements.txt file to manage requirements for all of the scripts. Our issue is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? By keeping them in one repository it is easier to deploy them onto one server, and we can use just one cron file for all the scripts. However this feels wrong, as the 20 cronjobs are not logically related. Additionally, when using one requirements.txt file for all the scripts, it's hard to figure out what the dependencies are for a particular script, and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large and that solution seems to be overkill. A related question is: do we use one big crontab file for all cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain, as there would then be 20 different cron files to keep track of. In short, our main question is: do we keep them all closely bundled as one repository, or do we separate them out into their own repositories, each with its own requirements.txt and fabfile.py? We feel like we're probably overlooking some really simple solution. Is there an easier way to deal with this issue?
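    On the "one big crontab vs. 20 files" sub-question, one pattern worth mentioning (an assumption that the target systems run a standard cron implementation; names below are placeholders): skip the per-user crontab entirely and drop one file per script into /etc/cron.d/, so deployments never overwrite each other:

        # /etc/cron.d/report-cleanup  (one file per cronjob; note the extra user field)
        # m h dom mon dow user  command
        */15 * * * *  deploy  /opt/cronjobs/report_cleanup/run.py >> /var/log/cron/report_cleanup.log 2>&1

        # deploy step for each script: copy its own cron fragment into place
        sudo install -m 0644 deploy/report-cleanup.cron /etc/cron.d/report-cleanup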

    Read the article

  • Install full version of Unity on Chrome OS using crouton

    - by Sam Kong
    I have an Acer Chromebook. Using crouton, I installed Ubuntu (Unity) on it. I am pretty familiar with Ubuntu 12.04, but the installed system is a very minimal package set. My fonts are missing, and I manually installed the language pack for Korean but the browser still can't display Korean characters. Is there a way to install the whole package set via crouton, like when you install Ubuntu 12.04 from the CD? Or is there a script that installs the missing packages on the bare Ubuntu? Thanks. Sam
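    A sketch of the two usual routes (package names are standard Ubuntu ones, but treat the exact set as an assumption): pull the full desktop metapackage into the existing chroot, and add Korean fonts explicitly, since the language pack alone doesn't include them:

        # inside the crouton chroot: install the full Unity desktop metapackage
        sudo apt-get update
        sudo apt-get install ubuntu-desktop

        # Korean display needs fonts in addition to the language pack
        # (the font package is ttf-nanum on 12.04, fonts-nanum on later releases)
        sudo apt-get install language-pack-ko ttf-nanum

        # alternatively, from Chrome OS, re-run crouton against the chroot with extra targets:
        # sudo sh ~/Downloads/crouton -n precise -t unity,xorg,extension -u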

    Read the article

  • Sendmail encrypted

    - by user1948828
    I manage a website running on Apache. It has public and private areas. When people apply for an account to access the protected portions of the site, they do a TLS/SSL-protected POST containing their information, which is saved to a (hopefully) non-public directory on the server. Then I have a Python script which takes the URL-encoded POSTs with this user information, sends back a plaintext confirmation to the applicant, encrypts their information with a freeware Java command-line utility to protect it (specifically this one: http://spi.dod.mil/ewizard.htm), base64-encodes it, puts it in a file as a MIME attachment and uses sendmail to forward the information to my (and several coworkers' scattered around the country) email account(s) on an Exchange server with Outlook clients. This has worked well for years, but it is awkward because it involves manually decrypting the information on a Windows box once it is received, using the above-mentioned encryption utility. This significantly limits how many can be processed. I would like to be able to encrypt the information in a format that Outlook/Exchange can natively understand and display, so that these emails can be viewed simply by clicking on them. I do have company-provided PKI public certs for all the people I need to send to, and am able to send/receive encrypted emails in Outlook manually, but would like to know how I can send to Outlook from apache/linux/python from the command line using the same PKI certs. I don't need to receive them, just send. Is there a utility that can do this? I had thought PGP might, but I haven't been able to figure it out.
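    Since Outlook's native format for this is S/MIME and the recipients' public certificates are already in hand, one sketch (an assumption, not something from the post; addresses and filenames are placeholders) is to let openssl smime build the encrypted message and hand it straight to sendmail:

        # encrypt body.txt to the recipient's public cert and pipe it to sendmail;
        # recipient.pem is that person's exported PKI certificate in PEM format
        openssl smime -encrypt -aes256 \
            -from webform@example.com \
            -to coworker@example.com \
            -subject "New account application" \
            -in body.txt recipient.pem | sendmail -t

        # several recipients: list one cert per person at the end of the command
        # openssl smime -encrypt -aes256 -in body.txt alice.pem bob.pem ...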

    Read the article

  • Apache 2.4 and PHP 5.4 getting connection reset errors in the browser

    - by zuallauz
    Over the weekend I upgraded my development web server to Apache 2.4 and PHP 5.4. In my web application, which was previously working great on Apache 2.2 and PHP 5.3, I now get messages saying "the connection was reset" in Firefox. See screenshot. I am connecting to the Linux machine via the local LAN. I'm assuming it might be something to do with the new version of Apache or PHP, or the new LAMP stack which I downloaded from BitNami? It seems to happen every 5-10 requests, and it appears more likely to be triggered when I send a POST request from a page. Is it timing out the script or something? These are just basic dynamic pages I'm loading, and they worked perfectly on Apache 2.2 and PHP 5.3. Here are my httpd.conf and php.ini if that has any clues. Any ideas? Any help much appreciated.

    Read the article

  • Directory Not Found Error

    - by noobguy
    I am trying to verify Tails, and when I get to the command-prompt portion of the verification some difficulties seem to have arisen. Below is the session:

        noob@noob-System-Product-Name:~$ cd [/media/noob/UUI]
        bash: cd: [/media/noob/UUI]: No such file or directory
        noob@noob-System-Product-Name:~$ gpg --keyid-format long --import tails-signing.key
        gpg: can't open `tails-signing.key': No such file or directory
        gpg: Total number processed: 0

    The same thing happens when I try from the download directory:

        noob@noob-System-Product-Name:~$ cd [/home/noob/Downloads]
        bash: cd: [/home/noob/Downloads]: No such file or directory
        noob@noob-System-Product-Name:~$ gpg --keyid-format long --import tails-signing.key
        gpg: can't open `tails-signing.key': No such file or directory
        gpg: Total number processed: 0

    Any suggestions would be greatly appreciated.
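    For what it's worth, the square brackets in guides like this are just notation for "substitute your own path"; typed literally, bash looks for a directory actually named [/media/noob/UUI]. A sketch of the intended commands:

        # change into the directory that really holds the downloaded key, without brackets
        cd /home/noob/Downloads

        # confirm the key file is there, then import it
        ls tails-signing.key
        gpg --keyid-format long --import tails-signing.key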

    Read the article

  • Daemons die with bus error when their binaries live on NFS

    - by mbac32768
    We have some daemons executing on a number of hosts. The daemon executable images are these very large binaries that are hosted on NFS. When the binaries are updated on the NFS server, the previously running daemons sometimes drop dead with a Bus error. I'm assuming what's happening is the NFS server is replacing the binaries in a way that's invisible to the VFS layer on the NFS clients so they end up loading pages from the updated binary, which of course leads to madness. We tried moving the new binaries into place instead of cp, but that doesn't seem to fix it. I'm considering simply mlock()'ing the binary in the daemon startup script, but surely there's magic NFS options or semantics that we should be abusing. Is there a better way to fix this?
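    One mitigation often used for this (an assumption about your startup scripts, not something in the post): copy the binary off NFS onto local disk before exec'ing it, so the running image no longer depends on NFS pages, and publish updates on the server as a rename into a brand-new file rather than rewriting the old inode in place. Paths below are placeholders:

        #!/bin/sh
        # daemon wrapper: run from a private local copy instead of the NFS path
        NFS_BIN=/nfs/apps/bigdaemon
        LOCAL_BIN=/var/run/bigdaemon.$$

        cp "$NFS_BIN" "$LOCAL_BIN" && chmod 0755 "$LOCAL_BIN"
        exec "$LOCAL_BIN" "$@"

        # on the NFS server, publish updates atomically into a new inode:
        #   cp bigdaemon.new /export/apps/.bigdaemon.tmp && mv /export/apps/.bigdaemon.tmp /export/apps/bigdaemon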

    Read the article

  • What do I need to know to design a language and write an interpreter for it?

    - by alFReD NSH
    I know this question has been asked before, and there are even thousands of books and articles about it. But the problem is that there are too many, and I don't know which are good enough. I have to design a language and write an interpreter for it. The base language is JavaScript (using Node.js), but it's OK if the compiler is written in another language that I can use from Node. I did some research on compiler-compilers in JS: there is jison (a Bison implementation in JS), waxeye, and peg.js. I decided to give jison a try, due to its popularity and its use by CoffeeScript, so it should be able to cover my language too. The grammar definition syntax is similar to Bison's. But when I tried to read the Bison manual it seemed very hard for me to understand, and I think that's because I don't know a lot of things about what I'm doing; for example, I don't know what formal language theory is. I am experienced in JavaScript (more so than the average programmer) and also know basic C and C++ (not much experience, but I can write working code for basic things). I haven't had any formal education, so I may not be familiar with some software engineering and computer science principles, though every day I try to grasp a lot of articles and improve. So I'm asking if you know any good book or article that can help me. Please also write why the resource you're suggesting is good.

    --update--

    The language I'm trying to create is not really complicated. All it has is expressions (with or without units), comparisons and logical operators. There are no functions, loops, ... The goal is to create a language that non-programmers can easily learn, and to write customized validations and calculations.

    Read the article

  • How do I protect large file downloads through PHP and/or Apache?

    - by Eric
    We have some large files (1-8GB) that are not publicly accessible. Currently we're serving them up through a PHP script that buffers the files in 1MB chunks and writes it to the output. It's incredibly CPU intensive and slows the server down when only a few downloads are active. We want to move the file transfer work to Apache or a more efficient method. We are using cookie authentication. FTP downloads are out unless there's some way to authenticate FTP sessions through the existing PHP session cookie. Ideally we'd like something where we can use PHP to hide the link to the file while it passes off the file transfer work to Apache, which is no doubt far more efficient at HTTP file transfers than PHP. We want to be able to resume downloads as well. Any help is appreciated.
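    The usual answer to exactly this setup is Apache's mod_xsendfile: PHP keeps doing the cookie/session check, then emits an X-Sendfile header and exits, and Apache streams the file itself (with Range/resume support). A sketch, with all paths and names as placeholders rather than anything from the post:

        # Debian/Ubuntu: install and enable the module
        sudo apt-get install libapache2-mod-xsendfile
        sudo a2enmod xsendfile && sudo service apache2 restart

        # in the vhost config: allow X-Sendfile and whitelist the protected directory
        #   XSendFile on
        #   XSendFilePath /data/protected

        # in the PHP download script, after the session check, roughly:
        #   header('X-Sendfile: /data/protected/' . $file);
        #   header('Content-Type: application/octet-stream');
        #   header('Content-Disposition: attachment; filename="' . basename($file) . '"');
        #   exit;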

    Read the article
