Search Results

Search found 37654 results on 1507 pages for 'function prototypes'.

Page 1270/1507 | < Previous Page | 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277  | Next Page >

  • Routing and authenticating all access through squid

    - by Knight Samar
    Hi, I want to route all Internet access in my network through a Squid proxy server and authenticate and log all users. I want this to be a client-independent setup, so that no one needs to change anything in their browsers or on their machines. I have set the network gateway as the proxy server (using options in the DHCP server) so that all traffic is sent to it. I tried running Squid as a transparent proxy, but it won't authenticate in that mode. I tried using iptables to redirect all traffic to port 3128, but Squid's authentication dialog never pops up. I then tried telling DHCP to hand out WPAD to all clients by placing a WPAD file on a webserver for automatic proxy configuration. Changes in dhcpd.conf:

        option wpad code 252 = text;
        option wpad "\n\000";
        option wpad "http://192.168.1.5/wpad.dat\n";

    The WPAD file:

        function FindProxyForURL(url, host) {
            return "PROXY squid-server-ip-address:3128; DIRECT";
        }

    But the browsers (different versions of Firefox and IE) seem to ignore it. :( What should I do?
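
    For what it's worth, transparent interception and proxy authentication are mutually exclusive by design: a client that does not know it is talking to a proxy never answers Squid's 407 Proxy-Authentication-Required challenge, which is why no dialog appears. If WPAD can be made to work, a minimal non-transparent squid.conf sketch with basic authentication might look like the following (the ncsa_auth path and the password file are assumptions):

        # squid.conf sketch - require a login for every request, then log it
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
        auth_param basic realm Internet access
        acl authed proxy_auth REQUIRED
        http_access allow authed
        http_access deny all
        http_port 3128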

    Read the article

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem with all of the sites on our server going down at once, after which I have to reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically saying that I'm going to have to look into it and figure it out myself, which really isn't possible since I know nothing. Anyway, here are the things they said were possible causes:

        - "Strange logs" in my Apache webserver log: sh: fetch: command not found
        - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
        - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server still go down.)
        - Some WordPress sites with plugins are getting errors like:
            PHP Warning: pack(): Type H: illegal hex digit G in...
            PHP Fatal error: Cannot use object of type stdClass as array in...
            PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
            PHP Fatal error: Call to undefined function file_exists() in...
            PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at my wit's end and have no idea what to do now. If anyone could give me some advice or point me in the right direction, I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
        Type: Linux
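
    As a first step, a few read-only commands can narrow things down before the next crash. This is a generic diagnostic sketch; the paths assume a typical Linux/Apache layout and may differ on a managed host:

        # the Apache error log around the time of a crash
        tail -n 200 /var/log/httpd/error_log
        # is MaxClients actually being hit?
        grep -i maxclients /var/log/httpd/error_log
        # what memory limit is PHP really running with?
        php -i | grep memory_limit
        # memory and swap pressure while the sites are up
        free -m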

    Read the article

  • Apache displays error page half way through PHP page execution

    - by Shep
    I've just installed Zend Server Community Edition on a Windows Server 2003 box; however, there's a problem with the display of a lot of our PHP pages. The code previously ran under the same version of PHP (5.3) on IIS without any issues. By the looks of things, Apache (installed as part of Zend Server) is erroring out partway through rendering the page when it comes across something it doesn't like in the PHP. Going through the code, I've been able to get past some of the problems by removing the error suppression operator (@) from function calls and by changing the format of some includes. However, I can't do this for the whole site! Weirdly, the error code is reported as "200 OK". The snippet below shows how the Apache error HTML interrupts the regular HTML of the page:

        <p>Ma quande lingues coalesce, li grammatica del resultant lingue es plu
        simplic e regulari quam ti del coalescent lingues.</<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>200 OK</title>
        </head><body>
        <h1>OK</h1>
        <p>The server encountered an internal error or misconfiguration and was
        unable to complete your request.</p>
        <p>Please contact the server administrator, [email protected] and inform
        them of the time the error occurred, and anything you might have done
        that may have caused the error.</p>
        <p>More information about this error may be available in the server error log

    The Apache error log doesn't offer any explanation for this, and I've exhausted my Googling skills, so any help would be greatly appreciated. Thanks.
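
    One way to surface the real error - a generic sketch, not something specific to Zend Server - is to turn on full error display and logging in php.ini, reload a failing page, and read the named log (the log path is an assumption):

        ; php.ini - diagnostic settings for a dev box only
        error_reporting = E_ALL
        display_errors = On
        log_errors = On
        error_log = "C:\php_errors.log"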

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebone mini-PC, put a 1 TB WD hard drive and 2 GB of RAM into it, and installed Ubuntu on it. The machine will serve as a media server (streaming videos to my PS3) and as a webserver for some small private projects. I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine, so I created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens I can't reach the Internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there, etc.). Between the machines sits a Linksys router. When I try to ping the router from the Ubuntu machine (after the connection stops working), only a very small subset of the packets actually get there (around 20%). When I restart the Ubuntu machine, everything seems to work normally again. I have no idea where the problem lies here. Does anybody have a clue? Thanks in advance!
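
    The ~80% ping loss that clears on reboot points more toward a NIC or driver problem than toward Samba itself. A few hedged diagnostics to run while the connection is broken (the interface name eth0 is an assumption):

        # look for NIC resets, link flaps or driver errors
        dmesg | tail -n 50
        # link state, negotiated speed and duplex
        sudo ethtool eth0
        # RX/TX error counters
        ifconfig eth0 | grep -i errors
        # reload just the interface instead of rebooting the whole machine
        sudo ifdown eth0 && sudo ifup eth0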

    Read the article

  • Disable touch pad for mouse button region on new HP Pavilion models?

    - by John
    I bought a new HP Pavilion dv6 series laptop. The laptop itself is fine, but it has the new HP touchpad mouse, which I absolutely hate. It's such a stupid problem to have with a computer. The left and right mouse buttons are themselves part of the touchpad, meaning that if I tap the buttons without actually pressing them down, it registers the same way as the mouse pad (the cursor moves, tap-to-click activates, etc.). This is a major annoyance because it prevents you from operating the mouse pad with anything more than a single finger. If, for example, I use my right-hand index finger to move the cursor using the touchpad and rest my left-hand index finger on the left mouse button for more efficient mousing, the mouse reacts as if I'm trying to use two fingers to move it, and it will either just sit there or spaz out. The only way this works is if I keep the finger that is resting on the mouse button absolutely still, which is very difficult and therefore very annoying. Also, even if I do abide by the arbitrary new decree of single-finger mousepad operation, I still have a problem: when I press down on the left or right click buttons, the mouse moves slightly, what with the buttons also being part of the touchpad and all. This would not be that hard to avoid, except that they decided to also make the buttons much harder to press down. Now, whenever I go to click something, I press hard on the mouse button, causing my finger to slightly move or roll or flatten out a bit, causing the cursor to move slightly, and causing me to click on something different. What I would like to know is whether there is any way I can disable the touchpad on the buttons. I have gone through all of the settings under the Synaptics menus, but I cannot find anything about this. Did I miss something in one of the menus? If not, are there any updated drivers that allow for toggling of this function?

    Read the article

  • Utilities for finding x/y screen coordinates

    - by slothbear
    I have an application that needs the user to enter the x and y location of a couple of items on the screen. [Yes, this is crap and I'll replace it, but for now...] On Mac OS X, I'm using Snapz Pro X. When I choose to take a snap of a selection, the interface displays the mouse cursor location. This is OK for my personal use, but I can't ask users to buy a $69 program for this function. I thought I had a solution with the built-in program "Grab", but it reports the coordinates from the bottom left, and I need it from the top left. I don't have access to Windows; not sure what to use there. I've not even been able to come up with good search terms since mouse and coordinates and x/y are so common. Ideas are welcome there. Extra credit: same x/y finder for Linux. A Java program would be OK too, since the main application is in Java.
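
    Since the poster says a Java program would be OK, here is a minimal cross-platform sketch (Windows, Linux and Mac) using AWT's MouseInfo, which reports coordinates measured from the top left of the primary screen:

        import java.awt.MouseInfo;
        import java.awt.Point;

        public class MouseLocator {
            public static void main(String[] args) throws InterruptedException {
                // Print the pointer position ten times a second until killed.
                while (true) {
                    Point p = MouseInfo.getPointerInfo().getLocation();
                    System.out.printf("x=%d  y=%d%n", p.x, p.y);
                    Thread.sleep(100);
                }
            }
        }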

    Read the article

  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-less syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach. I'd love to understand their logic before I follow/ignore their model.
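
    For comparison, the protocol-relative variant the poster has in mind collapses the ternary entirely; a sketch with a placeholder hostname:

        (function () {
            var w = document.createElement('script');
            w.async = true;
            // Inherits http: or https: from the embedding page.
            w.src = '//widgets.example.com/widget.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(w, s);
        })();

    One caveat worth noting: a protocol-relative URL on a page opened from disk resolves to file://, which is one practical reason for keeping an explicit scheme switch.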

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace the problem that is crashing our nginx server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the nginx error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection,
        so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local, request: "GET
        index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/
        HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        // $this->_fileName is
        //   http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        // $_SERVER['REQUEST_URI'] =
        //   http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) =
            getimagesize($this->_fileName);

    The requested filename is a URL on the same server that is running the script: a link to a static .gif that does not exist, e.g. http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. Once the above line executes, the nginx server stops responding to any subsequent request. After waiting around 10 minutes, it starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not hang; it simply leads to an exception saying the URL could not be loaded (which is fine, as the URL is wrong).
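
    One plausible explanation - an assumption, not something confirmed in the post: getimagesize() on an http:// URL makes the FastCGI worker issue an HTTP request back through nginx to its own pool. If every worker is already busy, the new request queues behind the very request that is waiting for it, and the pool deadlocks until a timeout fires, which would match the ~10-minute recovery. With a pool manager such as PHP-FPM the effect can be modelled with deliberately small values:

        ; php-fpm pool sketch - illustrative values only
        pm = static
        pm.max_children = 1              ; one worker: any loopback HTTP request
                                         ; waits on the very worker that made it
        request_terminate_timeout = 600s ; ~10 minutes, matching the recovery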

    Read the article

  • Games consoles won't connect through the TP-Link TL-WA500G Access Point

    - by Manfred Wolff
    I hope that someone can help me. I have several laptops and other devices, all using my wireless router (a Netgear from Sky Digital). To extend the range to the back of the house, I purchased a TP-Link TL-WA500G range extender. Configured as a pure repeater, it picks up the signal from the Netgear router, and the Netgear router does the DHCP, handing out the IP addresses. This all works a treat with several different laptops and my iPhone 4S, but when my son tries to use his Xbox 360, Sony PlayStation 3 or Nintendo Wii, those devices fail to acquire an IP address; they just sit there waiting for the IP config. This also happens with my wife's HTC Desire ONE Android phone. My son says that when his HTC Desire C won't get an IP address, he just unplugs the AP briefly - the phone connects and he puts the AP back on. Once a device is connected to the router, the AP doesn't disturb it. The games consoles don't seem to work like that: they stop working when the AP is reconnected. I had my son try to configure permanent IP addresses, and he said that did not work either, though I have to confirm that, as I did not see it for myself. Has anybody seen this before? I have searched the net and have not found any similar problems anywhere. I wonder if there is a setting somewhere that would fix this. Many thanks to anyone reading this and trying to help. M

    Read the article

  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side effect appeared when I accessed one of my file-based hidden TrueCrypt volumes shortly thereafter. Some of the files in the volume - NOT all - seem to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume were still intact. Is this not weird? The only other real change I can think of is that the hard drives were connected to different SATA ports on the mobo. I really don't know the TrueCrypt encryption well enough to say what could cause this... and the fact that not all the files were corrupted makes it more bizarre still. So, first off (and I'm not too hopeful on this point), would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Other than that, I'm just curious how this happened and how I can prevent it next time. Thanks!

    Read the article

  • Finding the current user authenticated by basic auth (Apache)

    - by jtd
    When you log in through a basic auth page, is the username you authenticated as stored anywhere (on the server or the client machine), maybe in an environment variable? Background: I have a common web administration page for an e-mail server, and I'd like to know who is doing what. When a user successfully logs in via basic auth, I want to be able to identify them and log their actions, so that each time a request is submitted, I can write to a log file. The basic format would be:

        $username ran a $function against $useraccount

    So if a user changed someone's permissions, e.g.:

        Admin-Bob ran a permission change against User-Scott

    That way, if errors occur, I can easily trace back in the log file what actions led to the cause. I tried checking the %ENV hash to no avail. Any ideas? I don't really want to get into PHP-like sessions, because that would mean scrapping my basic auth, which already gives me a fine degree of control. If I had to code something with sessions, I'd need to implement a system to block users after a maximum number of tries and so on, which I don't really want to code. I think this is better geared towards Server Fault because it pertains to Apache more so than to any programming language; sessions can be done in a myriad of languages.
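
    For reference, Apache does expose the authenticated username: CGI scripts receive it as the REMOTE_USER environment variable (in Perl, $ENV{REMOTE_USER}, which is only populated during a request, not in a shell), and the access log can record it via the %u format token. A sketch for httpd.conf (the log path and format nickname are assumptions):

        # Log who did what on the admin pages; %u is the basic-auth user.
        LogFormat "%h %u %t \"%r\" %>s" adminfmt
        CustomLog /var/log/httpd/admin_actions.log adminfmt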

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move its Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on each message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months. So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now. What we're looking for is a method to go through each user's mailbox and alter the modified timestamp of each message (or preferably only those received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place; we've run across a few tools that cost something ridiculous like $25 per user, and I don't think we're paying close to that for Exchange and the archiving solution put together. Whatever we settle on should work on a live mailbox with no downtime - playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation or some other approach.
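
    A hedged, untested sketch of the PowerShell route via the EWS Managed API: the modified timestamp lives in the MAPI property PR_LAST_MODIFICATION_TIME (tag 0x3008). EWS normally treats that property as read-only, so Exchange may refuse the write; this only illustrates the shape of the attempt, not a confirmed solution (the DLL path is an assumption):

        Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\1.1\Microsoft.Exchange.WebServices.dll"
        $svc = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
        $svc.AutodiscoverUrl("admin@example.com")
        # PR_LAST_MODIFICATION_TIME = 0x3008, PT_SYSTIME
        $prop = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x3008, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::SystemTime)
        # For each restored item ($item), copy the received date over the modified date:
        $item.SetExtendedProperty($prop, $item.DateTimeReceived)
        $item.Update([Microsoft.Exchange.WebServices.Data.ConflictResolutionMode]::AlwaysOverwrite)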

    Read the article

  • Merge several mp4 videos using ffmpeg and PHP [on hold]

    - by rihab
    I would like to merge several videos using ffmpeg and PHP. I want to retrieve the names of the videos dynamically, but I can't merge all the videos together - I only get i-1 merged videos. This is the code I use:

        <?php
        $checkBox = $_POST['language'];
        $output = rand();

        function conv($checkBox) {
            $tab = array();
            for ($i = 0; $i < sizeof($checkBox); $i++) {
                $intermediate = rand();
                $tab[$i] = $intermediate;
                exec("C:\\ffmpeg\\bin\\ffmpeg -i C:\\wamp\\www\\video_qnb\\model\\input\\$checkBox[$i].mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts C:\\wamp\\www\\video_qnb\\model\\output\\$intermediate.ts");
            }
            return $tab;
        }

        $t = conv($checkBox);
        for ($i = 0; $i < sizeof($t); $i++) {
            if ($i != 0) {
                if (sizeof($t) <= 2) {
                    exec('C:\\ffmpeg\\bin\\ffmpeg -i "concat:C:\\wamp\\www\\video_qnb\\model\\output\\'.$t[$i-1].'.ts|C:\\wamp\\www\\video_qnb\\model\\output\\'.$t[$i].'.ts" -c copy -bsf:a aac_adtstoasc C:\\wamp\\www\\video_qnb\\model\\output\\'.$output.'.mp4');
                } else {
                    exec('C:\\ffmpeg\\bin\\ffmpeg -i "concat:C:\\wamp\\www\\video_qnb\\model\\output\\'.$t[$i-1].'.ts|C:\\wamp\\www\\video_qnb\\model\\output\\'.$t[$i].'.ts" -c copy -bsf:a aac_adtstoasc C:\\wamp\\www\\video_qnb\\model\\output\\'.$output.'.mp4');
                    exec("C:\\ffmpeg\\bin\\ffmpeg -i C:\\wamp\\www\\video_qnb\\model\\output\\".$output.".mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts C:\\wamp\\www\\video_qnb\\model\\output\\i.ts");
                    exec('C:\\ffmpeg\\bin\\ffmpeg -i "concat:C:\\wamp\\www\\video_qnb\\model\\output\\i.ts|C:\\wamp\\www\\video_qnb\\model\\output\\'.$t[$i+1].'.ts" -c copy -bsf:a aac_adtstoasc C:\\wamp\\www\\video_qnb\\model\\output\\final.mp4');
                    $i++;
                }
            }
        }
        ?>

    Can anyone help me??
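
    A hedged sketch of a fix: the pairwise merging above overwrites its own output file on every pass and hard-codes the intermediate name i.ts, so each iteration loses one video. ffmpeg's concat protocol accepts any number of segments in a single call, which removes the second loop entirely (paths follow the ones in the question):

        <?php
        // Build one "concat:" input from ALL intermediate .ts files.
        $dir = 'C:\\wamp\\www\\video_qnb\\model\\output\\';
        $parts = array();
        foreach ($t as $name) {
            $parts[] = $dir . $name . '.ts';
        }
        $concat = implode('|', $parts);
        exec('C:\\ffmpeg\\bin\\ffmpeg -i "concat:' . $concat . '" -c copy -bsf:a aac_adtstoasc ' . $dir . $output . '.mp4');
        ?>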

    Read the article

  • How do I set up an Alias on Apache with XAMPP on Linux? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added Alias /f /home/knarf/prog/php/fwyxz at the end of httpd.conf. When I try to access it, I get a 403. From the Apache error_log:

        [error] [client 127.0.0.1] (13)Permission denied: access to /f denied.

    I've already tried several solutions (userdir and symlinks), but they both failed with the same error. I've also tried to add this after the Alias:

        <Directory "/home/knarf/prog/php/fwyxz">
            Order allow,deny
            Allow from all
        </Directory>

    But again, permission denied. Now, if I change the user/group under which Apache runs from nobody to knarf, it seems to work (static files are OK), but PHP can't use/initialize sessions:

        [error] [client 127.0.0.1] PHP Warning: session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3
        [error] [client 127.0.0.1] PHP Warning: Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [error] [client 127.0.0.1] PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0

    This is really frustrating.
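
    One likely culprit - an assumption worth testing: even with the target tree at 777, the nobody user must be able to traverse every parent directory, and home directories are often 700. Something like:

        # Give "other" users execute (traverse) permission on each path component.
        chmod o+x /home/knarf /home/knarf/prog /home/knarf/prog/php
        # The session errors after switching Apache to run as knarf suggest the
        # existing session files in /tmp are still owned by nobody; clearing
        # them (on a dev box) is one way out:
        rm -f /tmp/sess_*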

    Read the article

  • Windows XP usb drivers reinstalling upon reboot

    - by iWerner
    We have a Windows XP SP3 laptop (an Acer TravelMate 7320) to which we connect a variety of astronomy equipment (a telescope, its mount, some cameras and others), all of which connect through USB. When we plug in these devices, Windows tells us that it has detected the hardware and installs the driver. All of the devices then function correctly using the software that came from the vendor (unfortunately, one of the vendors does not support 64-bit Vista, which is why we're using our XP laptop). However, when we reboot the computer we experience a variety of symptoms: Windows reports that it has found new hardware for some of the devices and tries to reinstall their drivers, and some of the other devices need to be unplugged and plugged in again before they are detected by the operating system, in which case Windows still tries to reinstall their drivers. It is as if Windows does not remember that it has already installed the drivers. Is this a common problem on Windows XP? If so, what can be done about it? Should we rather be looking at the laptop's firmware and drivers? We've looked into updating the chipset drivers, but this did not solve the problem. Thank you in advance.

    Read the article

  • Linux Distro - GUI similar to Windows

    - by DeaconDesperado
    I am in the process of refurbing several older laptop machines for use by a couple of college guys we have in training to learn basic web development in Python. These are students who intern at my company and are hoping to do some work when the summer comes building simple client-oriented webapps (learning the basics of OOP, MVC webapp design in Flask, etc.). We're trying to function as the "practical" side of their education. I would like to get them set up on these machines we have sitting about, but I'd like to use a Linux distro with a GUI that closely approximates what they are being compelled to use at school (Windows). I don't really have much of a preference as far as the GUI goes, since much of what we'll be learning together is accomplished on the command line; I just see this as an easier adjustment for them while they are still reliant on a graphical environment. In the past I'd go straight for Ubuntu, but since they started using the Unity GUI the responsiveness overall can be pretty clunky on older machines, especially since these machines (there are four of them) run the gamut on specs (though all are at least 1.0 GHz and none have anything better than basic integrated video). Has anyone had to set up a similar working environment in Mint, bare Debian or Zorin? Thanks.

    Read the article

  • Help configuring Mercury mail or similar with XAMPP to send e-mail outside of localhost

    - by user291040
    I'm building a PHP/MySQL-driven website for my department at work (installed via XAMPP). I need to be able to send mail to outside e-mail addresses (e.g., Yahoo, Hotmail, etc.) using the PHP mail() function. As I see it, I have two solutions:

        1. Configure the SMTP directive in php.ini to point to the server running at my work.
        2. Configure/run a mail server that can send e-mails outside of localhost (I'm trying Mercury because it comes installed with XAMPP).

    Here are the problems I've come up against. I took a guess at our SMTP server name, and when calling PHP mail(), I get the error:

        SMTP server response: 530 5.7.1 Client was not authenticated

    I can't be sure, however, that the SMTP name is correct (I can't get help from our IT guys because of politics). I have also tried Mercury mail. Mercury seems to pick up the request, but it doesn't want to forward the e-mail to the outside; I keep getting a temporary error 240 (temporary MX resolution error). I've searched high and low but still can't find a definitive answer on how to send e-mails outside of localhost. Any help is greatly appreciated.
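
    If the guessed SMTP server turns out to be correct, the 530 response means it demands authentication, which PHP's built-in mail() on Windows cannot perform; php.ini only lets you point at an unauthenticated relay (the hostname below is a placeholder):

        ; php.ini (Windows) - settings used by mail(); no SMTP AUTH support
        SMTP = mail.example.com
        smtp_port = 25
        sendmail_from = you@example.com

    A mailer library that speaks SMTP AUTH directly (PHPMailer, or PEAR's Mail package) is the usual way around that limitation.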

    Read the article

  • Error at the end of APC install

    - by cinqoTimo
    I need to get APC running for a Drupal install of mine. I found a fairly concise guide at http://blog.4rev.net/2009-09/installing-apc-accelerator-into-php5-fedora-core-11/ for installing on FC11, but I am using FC12. I figured I would give it a shot. I was able to run the following commands successfully - yum installed FC12 versions of everything in the FC11 guide:

        yum install php-pear
        yum install php-devel httpd-devel
        yum groupinstall 'Development Tools'
        yum groupinstall 'Development Libraries'

    Then I tried pecl install apc. Everything looked good until it got to the end, where it output the following error:

        /var/tmp/APC/php_apc.c: In function 'zif_apc_compile_file':
        /var/tmp/APC/php_apc.c:881: warning: unused variable 'eg_class_table'
        /var/tmp/APC/php_apc.c:881: warning: unused variable 'eg_function_table'
        /var/tmp/APC/php_apc.c: At top level:
        /var/tmp/APC/php_apc.c:959: error: duplicate 'static'
        make: *** [php_apc.lo] Error 1
        ERROR: `make' failed

    Some people have had success installing apc-beta, but that didn't work for me. Any suggestions? Is there something I missed that is critical in the FC12 version?
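
    Two hedged alternatives to building from source: Fedora carries a prebuilt package, and pecl can be pinned to an older APC release instead of the latest one (the version number below is illustrative):

        # Option 1: the distro package - no compiler involved
        yum install php-pecl-apc
        # Option 2: pin a specific release rather than the default
        pecl install apc-3.0.19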

    Read the article

  • Problems with USB devices when using VDR

    - by emmsinator
    Hey guys, I'm using VDR on vSphere 4, and it works successfully; I've already backed up several VMs with VDR and I like it very much. But now we have a problem. We have two VMs using a USB device server with a stick plugged in, which is definitely needed by these two VMs for licensing. Every time I start the backup process, the VMs lose communication with the USB server and its stick after the snapshot is built, while still online. Because of that, the software on these VMs can't work correctly, and I have to restart both machines to solve the problem. That is bad for an automatic backup. Does VDR have a special function for such cases, or is something like this already known? It would be no problem to shut the servers down for building snapshots on Saturday or Sunday. Can VDR initiate a shutdown before starting the backup process? Otherwise I must try to use scripts, but that wouldn't be so nice. Thanks a lot for your help.
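
    If VDR itself cannot do it, the script route may be less painful than feared. A hedged PowerCLI sketch that could run from Task Scheduler before the backup window (server and VM names are placeholders):

        # Shut the licensed VMs down cleanly before VDR starts...
        Connect-VIServer -Server vcenter.example.com
        Shutdown-VMGuest -VM "LicensedVM1", "LicensedVM2" -Confirm:$false
        # ...and power them back on after the backup window:
        Start-VM -VM "LicensedVM1", "LicensedVM2"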

    Read the article

  • Gmail: Received an email intended for another person. [closed]

    - by jonescb
    I'm not really sure how email routing works, but someone ordered something on Amazon and I received the confirmation email instead of them - or maybe we both got it, I don't know. The order doesn't show up in my account, so I'm certain I wasn't charged for it, but I shouldn't be getting other people's emails. We'll say that my email is [email protected], and somebody whose email is [email protected] places an order on Amazon. The confirmation email is sent to me at [email protected]. I checked the email header, and it did say To: [email protected], which is not my email address. At first I thought that Google ignores periods in email addresses, but I tested the account setup and it doesn't give any error when you put a period in the address. I didn't create the account; I just used the "check availability" function, and the address I chose with a period was fine. Maybe someone with knowledge about how email works could tell me why this happened. Is this a bug in the way Amazon sends emails? Or is it a bug in how Google receives them? Who should I report this issue to?

    Read the article

  • Write stderr to a file using PowerShell

    - by Zian Choy
    How do I capture error messages from a PowerShell-launched command in a text file? I searched the Internet for a while and found that, supposedly, I should be able to do something like:

        cmd /c "big blob of text >C:\output.txt 2>c:\errors.txt"

    to direct the output to output.txt and the errors to errors.txt, but when I try to run the command, I get the following error:

        cmd.exe : The filename, directory name, or volume label syntax is incorrect.
        At C:\Users\Zian\Desktop\Untitled1.ps1:27 char:4
        + cmd <<<< /c $command
            + CategoryInfo          : NotSpecified: (The filename, d...x is incorrect.:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    Furthermore, if I run the command without everything starting at "2", it executes correctly and output.txt catches the right output. I looked at "Redirect stderr to variable in powershell", but it wasn't helpful, because the answer to that question suggests capturing the entire output and filtering it in memory. In my case, I am backing up every database on a computer, and since the databases won't fit in my laptop's RAM, I cannot use that question's solution. I also found tantalizing suggestions about using $err = @(command goes here), but with no information on what to do other than simply inserting that line of text. I tried the search function on Server Fault with the string "@()", but it did not return any results. What can I do to get the error messages into errors.txt?
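
    A hedged alternative: let PowerShell perform the redirection itself rather than embedding > and 2> inside the string handed to cmd.exe, since PowerShell's own redirection operators work on native commands too:

        # $command holds the program and its arguments, with no redirection inside
        & cmd.exe /c $command > C:\output.txt 2> C:\errors.txt

    One caveat: PowerShell wraps each stderr line in an error record, so errors.txt may carry extra formatting compared to raw cmd.exe redirection.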

    Read the article

  • What is the bash syntax to create a new directory in the directory above?

    - by mozerella
    I aim to make a script for mogrify. The mogrify command will resize images in a directory and put the resized images into a directory on the same level, with the same name as the working directory but with a suffix (_a). The new directory will be moved to another collection later on. Something like this:

        #!/bin/bash
        mkdir ../n_a
        for file in *{.JPG|.jpg}; do mogrify -path ../n_a -resize 1200x1200 -quality 96; done

    I'm guessing ../ denotes the parent dir when working in a child directory, but I need help here. Edit: "n" needs to be replaced with the syntax for the working directory name. Sorry, there was a typo as well in the third script line; it should have read n, not x. Edit 2: This script does exactly what I need, and it's silent:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p $DEST
        mogrify -path $DEST -resize 1200x1200 -quality 96 *.jpg *.JPG

    Thanks to vgoff for the correct PWD syntax and cesareriva (http://www.cesareriva.com/archives/722) for showing me the DEST function. Something else: ${PWD##*/}_a does not care for spaces in the directory name, and the script fails - an empty dir is created in the same dir as the images. Found it out now: it needs quotation marks around $DEST too, presumably to help mkdir create the dir with a space in the name and mogrify write the files to the right place, like this:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p "$DEST"
        mogrify -path "$DEST" -resize 1200x1200 -quality 96 *.jpg *.JPG

    Read the article

  • What causes high CPU usage on the server during file upload

    - by bosiang
    When I try to upload a huge file (approx. 2GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction. Here is the result of the "top" command during the upload of 4 files (18MB, 38MB, 60MB, 33MB):

        1904 apache    20   0 33504 5740 1952 R 28.3  0.2  0:02.19 httpd
        1905 apache    20   0 33504 5740 1952 R 28.3  0.2  0:01.99 httpd
        1903 apache    20   0 33232 6968 3060 R 28.0  0.2  0:01.98 httpd
        1910 apache    20   0 33240 6020 2248 S 11.5  0.2  0:02.85 httpd
        2133 root      20   0  2656 1124  896 R  1.6  0.0  0:00.71 top
           1 root      20   0  2864 1404 1188 S  0.0  0.0  0:03.99 init

    Here is the code for chunking, although even when I don't use this code (just a simple file upload), it still causes the same high CPU usage:

        function sendRequest() {
            // clean the screen
            // bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100MB chunk sizes.
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    if (blob.webkitSlice) {
                        // for Google Chrome
                        var chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) {
                        // for Mozilla Firefox
                        var chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

    Read the article

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two WordPress sites and some Java-based collaboration software. The server has 2GB of memory and is currently using about 1.8GB. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how I might anticipate my memory needs, based on the traffic the server gets, if I were to release it. I've poked around on Google, and what I've found has been a bit tenuous. Is there a good heuristic to use when calculating memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B for figuring this out is to add a bunch of swap space to the server, give it some traffic, and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb for estimating how much memory I should plan on in advance... or if what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
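
    One back-of-envelope heuristic - an assumption to validate, not a rule: concurrent capacity is roughly (total RAM - no-traffic baseline) divided by the average size of one worker process, here about (2048 - 1832) / per-worker-MB. The per-worker figure can be measured directly (the process name apache2 is an assumption; it may be httpd or java on this box):

        # average resident size of the web server's children, in MB
        ps -C apache2 -o rss= | awk '{ sum += $1; n++ } END { print sum / n / 1024 " MB" }'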

    Read the article

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on TechNet about deploying a standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone" and "Standard" (non-enterprise) Certificate Authority, because I don't have an OID and don't think I should need one for a simple SSTP deployment. The certificates produced by the Certification Authority's "Issue" command only have a one-year validity period, and I want a multi-year certificate. At no point in this process is there any way to input this information, unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control - which means I can only do this using the workarounds in the article linked at the top, and only in Internet Explorer. Update: this question may be pointless, since self-signed keys do not appear to work when I try them with Windows 8 as the VPN client. The problem is that the keys self-created by the technique shown here do not have any Certificate Revocation Server URLs, so I get the error "The revocation function was unable to check revocation", and the VPN connection fails.
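
    On the validity period specifically: for a standalone CA, the lifetime of issued certificates is controlled by registry values on the CA itself, not by anything in the request, which is why the web enrollment page never asks. A hedged sketch (run on the CA, then restart the service; five years is an example value):

        certutil -setreg ca\ValidityPeriod "Years"
        certutil -setreg ca\ValidityPeriodUnits 5
        net stop certsvc && net start certsvc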

    Read the article
