Search Results

Search found 68249 results on 2730 pages for 'sudo work'.

  • Flash was "not designed to function across LANs". Any workarounds?

    - by Triynko
    See: http://helpx.adobe.com/flash/kb/problems-using-flash-authoring-across.html
    Issue: "When using Adobe Flash across a local area network (LAN) and networked drives/folders, you may experience any of the following problems:
    - Flash crashes while performing a test movie on FLA files located on a networked drive or folder.
    - FLA files get corrupted when opening from or saving to networked drives or folders.
    - Flash does not reflect changes in a custom class after compiling.
    - Flash, Flash Video Encoder, or Adobe Media Encoder crashes or corrupts Flash Video (FLV) files while encoding a source located on a networked drive or folder.
    - Flash Video Encoder or Adobe Media Encoder crashes or corrupts FLV files where the output folder is a networked drive or folder.
    - Published Flash Player (SWF) files and projectors are unable to load content located on networked drives or folders.
    - More than one instance of a SWF or projector on client machines cannot play back FLV files located on a networked drive or folder.
    Reason: The Adobe Flash IDE, FLV Encoder, Adobe Media Encoder, and Flash Player were not designed to function across LANs.
    Solution: Use of Flash files across local networks is not supported in any context. Published content should access data through a web server. All file sources should be opened and saved on the local system. Using Flash in such a scenario for project collaboration or content deployment is highly discouraged and may corrupt your source files. If you need to work in a collaborative environment or store source files on a server, use the project panel and/or a third-party version control system."
    SERIOUSLY? I cannot work on files located on a mapped network drive? How did they mess that one up? Does the Flash IDE really open the source file and wipe it clean to do the saving, rather than saving a copy first and then replacing it as an atomic file-system operation? How hard would it be for them to make a dummy temporary file for saving and then issue a MOVE command? Are there any workarounds, like something that can make a network drive as stable as a local drive, such as some kind of automatic local caching and syncing?
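
    A workaround along the lines the question suggests, as a sketch only: it assumes robocopy is available (it ships with Windows Vista and later, and is in the Resource Kit for XP), and both paths are hypothetical. The idea is to keep a purely local working copy and mirror it to and from the share, so Flash itself only ever opens and saves local files.

        rem Pull the project from the network share to a local working copy.
        robocopy Z:\project C:\work\project /MIR
        rem ... open, edit, and save only in C:\work\project with Flash ...
        rem Push the finished work back to the share.
        robocopy C:\work\project Z:\project /MIR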

  • Mac OS X: Finder view options?

    - by trolle3000
    Hi there. In OS X 10.6, Finder usually opens most folders or drives in one particular view when you double-click them. However, whenever I mount a TrueCrypt volume and double-click that, it opens in a different view. Is there any way to default to the first view option for all types of folders? I tried the view options in Finder, but they didn't seem to work.

  • Can you change the name of a flash drive on the boot menu?

    - by Mark Kramer
    I have two Staples-brand flash drives. They work fine and they boot OK, but they have the same name in the boot menu, so when I have them both in the computer, I can't tell which one I want to boot into. One has Ubuntu on it, the other BackTrack 5. The names of those drives also show up differently in different BIOSes. What parameters affect the name shown for a boot device, and how can I change it?

  • What applications can be used in a Red Hat/CentOS cluster?

    - by Sandra
    Hi, When I look at the Red Hat cluster manuals 1 2, they only explain how to install the cluster, not which applications can use it. I am new to clusters, so I don't know these things =) Let's say I want a 3-node high-performance cluster: which applications would work with it? Also, how does an application talk to the cluster? Does the application need to have been written to support clusters? Sandra

  • Why is the vSphere console view so slow?

    - by blade
    Hi, Why is the console view in the vSphere Client so slow? It's a real shame to have to establish an RDP session every time you work on one of the VMs just because of the speed of the console (I saw a tool that adds a right-click option to open an RDP session to a VM in the vSphere Client/ESX, but it was not reliable). The Workstation console view is very smooth, so I'd expect the vSphere Client console view to be just as smooth. Thanks

  • How do I install Ubuntu on Windows 7 with BitLocker?

    - by Sorin Sbarnea
    I installed Ubuntu 10.04 using Wubi on a Windows 7 x64 machine whose first partition is NTFS and uses BitLocker, and Ubuntu fails to load. Is Wubi incompatible with BitLocker, or is there a way to configure the system to make it work without removing BitLocker? Currently, when I try to load Ubuntu, I get a "No wbildr" error message.

  • SSH: run a command on login, and then stay logged in?

    - by jonathan
    I tried this with expect, but it didn't work: it closed the connection at the end. Can we run a script via ssh that will log into a remote machine, run a command, and not disconnect? That is: ssh into a machine, cd to such-and-such a directory, run a command, and stay logged in. -Jonathan
    The expect script I used:

        #!/usr/bin/expect -f
        set password [lrange $argv 0 0]
        spawn ssh root@marlboro "cd /tmp; ls -altr | tail"
        expect "?assword:*"
        send -- "$password\r"
        send -- "\r"
        interact
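
    For comparison, a minimal sketch of the same idea without expect (assuming key-based or interactive password authentication): ssh -t allocates a tty, runs the command, and then replaces it with a login shell so the session stays open.

        # Run a command on marlboro, then hand over an interactive shell.
        ssh -t root@marlboro 'cd /tmp && ls -altr | tail && exec $SHELL -l'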

  • Windows 7 remains powered on when restarting

    - by BombDefused
    I'm running Windows 7 x64 on an MSI P67A-GD53 motherboard, in an Antec P280 Super Midi Tower case with a Corsair 650W PSU. I've just installed a second instance of Windows 7 x64 on a separate disk (this is to keep my games separate from my work OS). The problem is that I now cannot restart from either instance of Windows 7. The shutdown and sleep commands work as expected, but when I try to restart, the shutdown happens and the system never reboots. Everything remains powered on until I hold down the power button to force the power off. I think (but am not 100% sure) this only started after I installed the second OS, and I assume it has something to do with the motherboard needing to know which OS to run up again. Some other forums I've read suggest that the PSU plays a major role in restarting and could be at fault. Changing the boot order of the disks in the BIOS does not change anything. Any suggestions gratefully received!
    Update: I now have a reproducible issue. The secondary OS install may have been a red herring; it was when Windows tried to reboot during the install that I noticed the problem. After playing around with installing drivers and rebooting many, many times, I have found that it is the OC Genie setting on the MSI motherboard that seems to trigger the problem. This makes sense, as I only started using the OC Genie feature a couple of weeks ago and probably hadn't used restart in that time. However, simply turning off OC Genie does not make the issue go away. I have to turn off OC Genie, shut down, start and enter the BIOS, go to the "Save and Exit" menu, choose "Restore Defaults", and answer yes to "Load optimized defaults" to clear the problem. After that, when the PC boots into Windows, I can restart as normal (from the OS on either HDD). So I only know how to work around the issue, not its root cause. I'd like to be able to use the OC Genie function, if anyone can suggest why I'm seeing this problem. Could it be that I'm drawing too much power when using the OC feature?

  • nginx, php-fpm, and multiple roots - how to properly try_files?

    - by Carson C.
    I have a server context which is rooted in a login application. The login application handles, well, logins, and then returns a redirect to "/app" on the same server if a login is successful. The application is rooted elsewhere, which is handled by the location block shown here:

        location ^~ /app {
            alias /usr/share/nginx/www/website.com/content/public;
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                include fastcgi_params;
            }
        }

    This works just fine; however, the $uri getting passed to PHP still contains /app, even though I am using alias rather than root. Because of this, the try_files directive fails to a 404 unless I link app -> ./ in /usr/share/nginx/www/website.com/content/public. It's obviously silly to have that link in there, and if that link ever gets lost: bam, dead website without an obvious cause.
    The next thing I tried was to remove the try_files directive entirely. This allowed me to rm the app link in my /public folder, and PHP had no problem locating the file and executing it. I used that to dump my $_SERVER global from PHP, and found that "SCRIPT_FILENAME" => "/usr/share/nginx/www/website.com/content/public/index.php" when the browser URI is /app. This is exactly right. Based on my fastcgi_params below, this led me to believe that try_files $request_filename =404; should work, but no dice: nginx still doesn't find the file and returns 404. So for right now, it only works without any try_files directive; PHP finds the file, whereas try_files could not. I understand this may be a PHP security risk. Can anyone indicate how to move forward? The nginx logs don't contain anything relating to the failed try_files attempt, as far as I can see.
    fastcgi_params:

        fastcgi_param QUERY_STRING      $query_string;
        fastcgi_param REQUEST_METHOD    $request_method;
        fastcgi_param CONTENT_TYPE      $content_type;
        fastcgi_param CONTENT_LENGTH    $content_length;
        fastcgi_param SCRIPT_FILENAME   $request_filename;
        fastcgi_param SCRIPT_NAME       $fastcgi_script_name;
        fastcgi_param REQUEST_URI       $request_uri;
        fastcgi_param DOCUMENT_URI      $document_uri;
        fastcgi_param DOCUMENT_ROOT     $document_root;
        fastcgi_param SERVER_PROTOCOL   $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE   nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR       $remote_addr;
        fastcgi_param REMOTE_PORT       $remote_port;
        fastcgi_param SERVER_ADDR       $server_addr;
        fastcgi_param SERVER_PORT       $server_port;
        fastcgi_param SERVER_NAME       $server_name;
        fastcgi_param HTTPS             $server_https;
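
    Worth noting: try_files inside an alias location is a long-standing nginx quirk ($uri still carries the location prefix rather than the aliased path). A commonly suggested workaround, sketched here and not tested against this exact setup, is to drop try_files and guard the PHP handler with an explicit file check on $request_filename:

        location ^~ /app {
            alias /usr/share/nginx/www/website.com/content/public;
            location ~ \.php$ {
                # try_files misbehaves under alias; check the real path instead.
                if (!-f $request_filename) { return 404; }
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                include fastcgi_params;
            }
        }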

  • Nvidia driver for Dell XPS M1330 in XP

    - by Recursion
    I need to run XP, but I cannot find a driver that works for this laptop in Windows XP. I have tried quite a few from my Google searches, but none work; they all say that no proper driver can be found for my hardware. I have tried replacing the .inf files in the driver installs, but I still get the same error. Any ideas will be appreciated, thank you.

  • Hostname problem

    - by codeshepherd
    My hostname is newton. When I set "127.0.0.1 Newton" in /etc/hosts, Parallels stops working. When I set "127.0.0.1 localhost" in /etc/hosts, Apache (installed via ports) stops working. When I add both "127.0.0.1 localhost" and "127.0.0.1 newton" to the hosts file, the Parallels network doesn't work.
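
    One detail that may matter here (a hedged suggestion, not verified on this machine): /etc/hosts accepts several names per line, and the conventional layout makes the hostname an alias of the single localhost entry instead of adding a second 127.0.0.1 line:

        127.0.0.1   localhost newton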

  • After installing monit, why does "monit status myproc" give "error connecting to the monit daemon"?

    - by Jason
    After installing monit, when I run "monit status myproc" I get "error connecting to the monit daemon". I read somewhere that the status command won't work when monit is running in daemon mode without its HTTP support: in that case, "monit status" tries to get the status from the daemon via http/tcp, and to start the HTTP interface you need to add the "set httpd ..." statement to the configuration. Is that still correct? That post was from 2005.
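
    For reference, a minimal sketch of the relevant monitrc stanza; the port and access rules here are just an example, adjust to taste:

        # Enable monit's HTTP interface so "monit status" can reach the daemon.
        set httpd port 2812 and
            use address localhost   # only listen on the loopback interface
            allow localhost         # only allow local connections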

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always mentions only ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).
    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist).
    This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one: a root class with 512 kbit/s to distribute, split into two classes A and B with 256 kbit/s each, each of which is in turn split into two leaf classes with 128 kbit/s each (A1, A2 and B1, B2).
    Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g., if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean that once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths must not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g., every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve: the real-time curve, the link-share curve, the upper-limit curve, all of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
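
    For concreteness, here is roughly how such a hierarchy is expressed as Linux tc commands. This is a hedged sketch only: eth0 and the class ids are placeholders, and the curve values mirror the questions above rather than any recommended configuration.

        # Root qdisc; unclassified traffic goes to leaf 1:12 (arbitrary choice).
        tc qdisc add dev eth0 root handle 1: hfsc default 12
        # Root class: 512 kbit/s total, capped by an upper-limit curve.
        tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit
        # Inner classes A and B: 256 kbit/s link-share each.
        tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
        tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit
        # Leaf A1: real-time curve [256kbit/s 20ms 128kbit/s] plus link-share.
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        # Leaf A2: link-share only.
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
        # Leaves B1 and B2, analogous to A1 and A2.
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit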

  • Command line tool for MediaWiki?

    - by Magnus
    Is there a command-line tool that would allow me to script the creation of accounts on a MediaWiki instance? The UI for creating an account is painful, and very time-consuming when tasked with creating 10+ accounts at a time. Unfortunately, I can't get ImportUsers to work due to the very old version of MediaWiki we use (and upgrading is unfortunately not possible at this time).
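
    MediaWiki ships a maintenance script, createAndPromote.php, that can create an account from the command line; whether a very old version includes it is worth checking first. A hedged sketch driving it from a shell loop, where users.txt is a hypothetical file of "username password" pairs:

        # Create one account per line of users.txt; run from the wiki's
        # root directory (assumes maintenance/createAndPromote.php exists
        # in this MediaWiki version).
        while read user pass; do
            php maintenance/createAndPromote.php "$user" "$pass"
        done < users.txt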

  • Displaying XML in Chrome Browser

    - by Josh
    I love the Chrome browser, but I use XML quite a lot in my development work, and when I view it in Chrome I just get the rendered text. I know that the source view is slightly better, but I'd really like to see the layout and functionality that IE adds to XML, namely:
    - Highlighting
    - Open/close nodes
    Any ideas how I can get this in Chrome? Thanks, Josh
    UPDATE: The XMLTree extension is available on the Google Chrome extension beta site. Thanks again for your help.

  • Batch script to rename a portion of a filename

    - by Rubik'sCube
    I've been trying to make a script that will take a file name and change only one word in it. An example would be renaming projectname.vcproj.domainname.username.user to projectname.vcproj.otherdomainname.username.user. I've tried using a for loop to list the directory and set the delimiter to a period, but it doesn't seem to be able to identify and change the right part. I'm working from examples that rename .txt files, but it doesn't seem to work. Any suggestions?
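
    One way to do this without tokenizing on periods at all is cmd's string substitution; a hedged sketch, with the file mask and both domain strings taken from the example above:

        @echo off
        setlocal enabledelayedexpansion
        rem Replace "domainname" with "otherdomainname" in every *.user file name.
        for %%F in (*.user) do (
            set "name=%%F"
            set "new=!name:domainname=otherdomainname!"
            rem Only rename when the substitution actually changed something.
            if not "!new!"=="!name!" ren "%%F" "!new!"
        )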

  • Open Garmin GPI files in Linux

    - by zero
    I have several files that are in the Garmin GPI format, from here, and want to access them in Linux. How can I do that?
    UPDATE: Well, I found the solution, so thanks, renan, for trying to add some sense to the question. Great work. Here is the answer: I will install gpsbabel to convert the files; once converted, they are just XML files that you can read as a human being.
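
    For reference, a hedged one-liner along those lines (the file names are placeholders; garmin_gpi is gpsbabel's name for the Garmin POI format):

        # Convert a Garmin GPI file to GPX (XML) for inspection on Linux.
        gpsbabel -i garmin_gpi -f points.gpi -o gpx -F points.gpx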

  • IIS 6 and PHP on Windows Server 2003 R2 32-bit

    - by ELS
    I am trying to get IIS 6 to serve PHP pages. I have followed: http://www.iisadmin.co.uk/?p=4&page=3 But with PHP 5.3 I don't see php5isapi.dll anywhere, so it doesn't work. Does anyone know what I might be doing wrong? I downloaded the .zip for 5.3 Windows non-thread-safe and manually put it at C:\PHP. I am stumped.

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so that it proxy_passes requests to my node apps. This question on Stack Overflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there (but since this question is about server configuration, it belongs on Server Fault). Here is the nginx configuration:

        server {
            listen 80;
            listen [::]:80;

            root /var/www/services.stefanow.net/public_html;
            index index.html index.htm;

            server_name services.stefanow.net;

            location / {
                try_files $uri $uri/ =404;
            }

            location /test-express {
                proxy_pass http://127.0.0.1:3002;
            }

            location /test-http {
                proxy_pass http://127.0.0.1:3003;
            }
        }

    Using plain node:

        var http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end('Hello World\n');
        }).listen(3003, '127.0.0.1');
        console.log('Server running at http://127.0.0.1:3003/');

    It works! Check: http://services.stefanow.net/test-http

    Using express:

        var express = require('express');
        var app = express();

        app.get('/', function(req, res) {
            res.redirect('/index.html');
        });

        app.get('/index.html', function(req, res) {
            res.send("blah blah index.html");
        });

        app.listen(3002, "127.0.0.1");
        console.log('Server running at http://127.0.0.1:3002/');

    It doesn't work :( See: http://services.stefanow.net/test-express I know that something is going on: (a) test-express is NOT reachable through nginx, yet (b) test-express IS running (and I can confirm it is running via the command line while ssh'd into the server):

        root@stefanow:~# service nginx restart
         * Restarting nginx nginx                    [ OK ]
        root@stefanow:~# curl localhost:3002
        Moved Temporarily. Redirecting to /index.html
        root@stefanow:~# curl localhost:3002/index.html
        blah blah index.html

    I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work):

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

    I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
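
    A likely culprit, offered as a hedged guess consistent with the curl output above: with proxy_pass http://127.0.0.1:3002; the backend receives the URI /test-express, and the Express app only defines routes for / and /index.html, so it answers 404. With trailing slashes, nginx strips the matched prefix before proxying:

        # Sketch: the trailing slashes make nginx map /test-express/... to /...
        location /test-express/ {
            proxy_pass http://127.0.0.1:3002/;
        }

    One caveat: app-issued redirects such as res.redirect('/index.html') will still point at the site root, so either make the app prefix-aware or rewrite the Location header (e.g. with proxy_redirect).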

  • MacBook Pro with Windows 7 - GPU always on

    - by Joonas Pulakka
    Übergizmo is reporting an issue with the new MacBook Pros' GeForce 330M GPU being always "on" under Windows 7, thus almost halving the battery life compared to that with OS X (which is able to somehow suspend that GPU and use the low-end integrated GPU to do the light work). Any solutions, or rumors of coming solutions?

  • brctl not working when bridging eth0 and at0

    - by Passi0n
    I made an access point with airbase-ng (its interface is at0), and I tried to bridge my eth0 and at0 with:

        brctl addbr demo
        brctl addif demo eth0
        brctl addif demo at0
        brctl demo up
        dhclient3 demo &

    I have already removed eth0's IP, so when I use "ping 192.168.1.1 -I eth0" there's no reply, but if I use "ping 192.168.1.1 -I demo" it works!!! In a browser, the internet works fine. So when I connect my Android to at0 (the access point), it should work the same, but it's not working at all :(
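
    For comparison, a hedged sketch of the usual bring-up sequence. Note that "brctl demo up" above is not a valid brctl subcommand; the bridge interface is normally brought up with ifconfig or ip:

        brctl addbr demo
        brctl addif demo eth0
        brctl addif demo at0
        ifconfig demo up        # or: ip link set demo up
        dhclient3 demo &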

  • What reverse proxy server will direct traffic to healthy servers whose health is based on a result string?

    - by joshua paul
    What reverse proxy server will direct traffic to healthy servers whose health is based on a result string? Ideally I'd like something like DNS Made Easy or UltraDNS (lol), but for reverse proxying. I have looked at Pound, DeleGate, HAProxy, Squid, Varnish, nginx, Apache, and Cherokee, but I can't see that they will work; they only test for the HTTP result code.
    Scenario:
    - a client requests www.aaa.com
    - www.aaa.com is a reverse proxy
    - the reverse proxy looks at "test.php" on servers 1.aaa.com, 2.aaa.com and 3.aaa.com for the result string "OK"
    - if a server is "OK", then the proxy directs requests to it
    Help!
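
    For what it's worth, later HAProxy releases (1.4 and up) can match on the response body via http-check expect, which is exactly this scenario; a hedged sketch using the hostnames above:

        backend aaa_pool
            option httpchk GET /test.php
            # Healthy only if the response body contains the string "OK".
            http-check expect string OK
            server s1 1.aaa.com:80 check
            server s2 2.aaa.com:80 check
            server s3 3.aaa.com:80 check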

  • How do I prevent infinite recursion in the X11 start-up process?

    - by chrisaycock
    I wasn't able to run X11 or Terminal after rebooting my Mac. After digging around, I got them to work when I commented out this line in my .cshrc: xset b off. It appears that xset will attempt to launch X11 if it isn't running already, and since X11 will launch the default shell through xterm and thus encounter the xset line above, we get an infinite loop. I would like to keep the above line in my .cshrc. Is there a way to prevent X11 from launching itself?
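
    A common guard for exactly this situation (a sketch for .cshrc, assuming csh's $?DISPLAY test is the right signal on this system):

        # Only touch X settings when a display is already available,
        # so a bare shell start-up never launches X11.
        if ($?DISPLAY) then
            xset b off
        endif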
