Search Results

Search found 24915 results on 997 pages for 'ordered test'.

  • xcopy Not Suppressing File/Directory Query

    - by Daniel Bingham
    Hey folks, I'm attempting to use xcopy to copy a file from one machine to another on our network as part of a Java program. I'm calling xcopy like this: xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y /i Because I'm calling it from within Java, I need all the prompts to be suppressed. For the most part, /i and /y have done exactly that. However, for this one file /i fails and I get the file-or-directory prompt. The result is that it hangs the entire program. I've also tried appending /s /t /q to the existing options, to no avail. Why isn't /i suppressing the file-or-directory prompt? Is there an order I need to pass the options in? Is there something else I need to do? EDIT: I should mention that the file is a text file containing a single line of text. It has no extension. It looks like this: FILE-NAME
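
    One detail from the xcopy docs worth noting: /i only suppresses the prompt by assuming the destination is a directory, and it only applies when copying more than one file, so for a single-file copy it does nothing. A workaround is to answer the prompt on stdin instead; a minimal sketch with a shortened hypothetical path:

        :: pre-answer the file-or-directory question; F = the target is a file
        echo F | xcopy "C:\src\FILE-NAME" "\\othermachine\c$\dest\FILE-NAME" /y

    From Java, that means either writing "F\n" to the process's OutputStream after launching xcopy, or launching it via cmd /c so the shell interprets the pipe.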

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. Disk usage had stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G  137G     0 100% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M  backups
        438M  cache
        9,4G  lib
        4,0K  local
        12K   lock
        76M   log
        24K   mail
        4,0K  opt
        88K   run
        184K  spool
        10G   tmp
        12K   www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm
        root@***:/var/tmp# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G   81G   51G  62% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog. The first entry relating to this incident is a MySQL message: 1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing... So I have absolutely no idea what caused this.
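
    When df and du disagree this sharply, the usual suspect is not the SSD but deleted files that a running process still holds open: df counts their blocks, du cannot see them, and the space only returns once the process releases the handle. A quick, non-destructive check (lsof must be installed):

        # list open files whose last directory entry has been removed
        lsof +L1
        # or, equivalently:
        lsof -nP | grep '(deleted)'

    For the SSD-health question, smartmontools is the standard Debian tool: smartctl -a /dev/sda.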

  • How to host an SSH server?

    - by balki
    Hi, I have a broadband internet connection and a wireless modem (Airtel India). I don't have a static IP address. I want to host an SSH/web/FTP server that is visible to the outside world, just for testing and learning purposes, so I can ask a friend to connect to my current IP address and test. My modem has an admin interface that allows me to forward and open ports. I set up the SSH server accordingly and used an online port-scan website to check port 22; it reports the port as open. I have an OpenSSH server running, and it works if I do ssh user@<local IP>, but it doesn't work if I do ssh user@122.xx.xx.xx, where 122.xx.xx.xx is the external IP address of my modem (checked via whatismyipaddress.com). Since the port appears to be open, I wonder if there is some setting I need to change in my server config to expose the server. How should I go about solving this?
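
    One thing worth ruling out first: many consumer routers do not support NAT loopback (hairpinning), so connecting to your own external IP from inside the network fails even though the port forward works fine from outside. A sketch of how to separate the two cases:

        # from a machine OUTSIDE the network (a friend's box, a phone on
        # mobile data, etc.); the external IP below is the poster's placeholder:
        ssh -v user@122.xx.xx.xx

        # from inside, confirm sshd listens on all interfaces:
        sudo netstat -tlnp | grep ':22 '   # expect 0.0.0.0:22, not 127.0.0.1:22

    If the outside test succeeds, nothing is wrong with the server config; the loopback path is simply not supported by the modem.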

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SAAS application customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular when UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for everyone else. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine. A speed test shows they should get 1.2 Mbps up, so a 3 MB file should take about 20 seconds to upload. It takes 60+ seconds on their network, and sometimes even small files take OVER 10 minutes or time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SAAS applications they use upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the client to begin ruling things out? It seems we need to measure the LATENCY of the networks involved, even at the switch level, and understand whether packets are being dropped somewhere and why. Where should I start? Any help is appreciated; I'll provide more info upon request.
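
    One way to split "our app" from "their pipe" is to time a raw upload that bypasses the application entirely, then the same transfer through the form. A sketch (hostnames and the test file are hypothetical):

        # raw upstream throughput from the client's network: run iperf3 -s on
        # a server you control, then from the client side:
        iperf3 -c measure.example.com -t 30

        # the same 3 MB file through the app, with timing:
        curl -o /dev/null -w "connect: %{time_connect}s  total: %{time_total}s\n" \
             -F "file=@test3mb.bin" https://app.example.com/upload

    If iperf3 shows the promised 1.2 Mbps while the form upload crawls, the problem sits above the transport layer; if both crawl, it is the client's upstream path.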

  • Short, intermittent USB port timeouts

    - by jacobsee
    I have a data acquisition application controlled by a Windows PC, using an Intel Desktop Board DH67CL motherboard. The board has 6 USB 2.0 ports along with 2 of the newer blue USB 3.0 ports, and the DAQ instrument connects to the computer via USB. Roughly once every day or two there is a short communication glitch on the USB that causes the instrument to disconnect briefly and then reconnect. This is logged, so I know whenever it happens; occasionally it causes the data acquisition to stop. I've verified that the glitches go away if I use the USB 3.0 ports, or a PCI add-on card with USB ports, so it seems something is going on with the built-in USB 2.0 ports on this motherboard. I haven't yet had a chance to test other motherboards. My question: what could be causing these glitches, and how can I get rid of them on this board? Or is there a better motherboard to use? We have standardized on the DH67CL because it's inexpensive, has the 3 PCI slots we need, and is readily available. We don't need the power of higher-end server boards, but reliability is important. Thanks. Update: this problem has been reproduced many times on different hardware, though we always use the same model of motherboard (DH67CL) and power supply (Antec EA-430D). I don't think the power requirements are very high, but I will check on that.

  • Nginx FLV audio pseudo-stream works but video is not loading

    - by sarah
    I am setting up a development server for a company and they want it to run on nginx. Their requirements are hotlink protection, MP4 and FLV pseudo-streaming, and secure streaming; nginx covers all of these. I have been configuring the server for the past two days, and since I am new to this field, in those two days I have only achieved hotlink prevention. The problem I am stuck on is FLV pseudo-streaming: getting MP4 pseudo-streaming to work was easy, but FLV has me completely stuck. I converted my FLV videos with the FLVMDI tool to inject plenty of keyframes, but when I try to seek to one of the keyframe offsets FLVMDI generated, e.g. test.flv?start=2681223, the video does not load, yet audio seeking works fine, so the FLV configuration in nginx.conf itself seems OK. I compiled nginx-1.2.1 following http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2, adding the extra module --with-http_flv_module. This forum is really active, so I hope you can point me in the right direction.
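
    For reference, the stock FLV module needs almost no configuration; a minimal sketch (the root path is hypothetical) for an nginx built with --with-http_flv_module:

        location ~ \.flv$ {
            flv;
            root /var/www/videos;
        }

    The module treats ?start= as a raw byte offset: it prepends an FLV header and serves the file from that offset, so the offset must land exactly on a video keyframe tag. An offset that points at an audio tag (or mid-tag) produces precisely this audio-but-no-video symptom, so it is worth verifying that the offsets the player requests match the keyframe table FLVMDI wrote into onMetaData.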

  • Apache: Isn't chmod 755 enough to set up symlink or alias on Apache httpd on Mac OS 10.5?

    - by eed3si9n
    On my Mac OS 10.5 machine, I would like to map a subfolder of ~/Documents, namely ~/Documents/foo/html, to http://localhost/foo. The first thing I tried was an Alias:

        Alias /foo /Users/someone/Documents/foo/html
        <Directory "/Users/someone/Documents/foo/html">
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </Directory>

    This got me 403 Forbidden. In the error_log I got:

        [error] [client ::1] (13)Permission denied: access to /foo denied

    The subfolder in question has chmod 755 access. I've tried requesting files directly, like http://localhost/foo/test.php, but that didn't work either. Next, I tried the symlink route: I went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has Options Indexes FollowSymLinks MultiViews. This still got me 403 Forbidden: Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo. What else do I need to set up? Solution: $ chmod 755 ~/Documents. In general, the folder to be shared and all of its ancestor folders need to be traversable by the www service user.
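
    The key point in that solution is directory execute (search) permission on every ancestor: Apache's _www user must be able to traverse the entire path, not just read the final folder. A sketch that grants only traversal on the ancestors and read access on the served tree:

        # 'x' on each ancestor lets _www pass through without listing contents
        chmod o+x /Users/someone /Users/someone/Documents /Users/someone/Documents/foo
        # readable files, searchable directories, under the html root only
        chmod -R o+rX /Users/someone/Documents/foo/html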

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch, which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 to 15 on the private network have no problem connecting to the server, but when user 16 tries to connect, a timeout occurs and the connection fails. It's not just the PHP application; it applies to any service on the server. When the 15 users are using the application, the server doesn't even answer pings. I haven't set any special limits in Apache's or MySQL's configuration, and the firewall is turned off because the server only serves the internal network. Is there a parameter in any of the network card's configuration files that might be causing this? Or should I suspect the router's or switch's configuration? UPDATE: Tomorrow I'm going to run some tests on the server, modifying two kernel parameters in /etc/sysctl.conf: net.core.somaxconn, which caps the queue of pending connections to the server, and kernel.shmmax, which sets the maximum shared-memory segment size.
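
    The fact that even ping stops answering at the 16th client points below Apache, since ICMP never touches the web stack; a connection or session limit in the router or switch, or a NIC/driver issue, fits that symptom better than somaxconn. Still, checking and raising the kernel queues is cheap; a sketch:

        # current values first
        sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog
        # trial increase (assumption: the defaults are the bottleneck; verify first)
        sudo sysctl -w net.core.somaxconn=1024
        sudo sysctl -w net.ipv4.tcp_max_syn_backlog=2048

    If the limit persists with these raised, count concurrent connection states while the 16th user connects: netstat -ant | awk '{print $6}' | sort | uniq -c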

  • Exceptional slowdown of robocopy copying from VM to DFS array

    - by user1588867
    I've got an old Windows 2003 VM (VMware) on a blade cluster, and I'm moving a considerable number of files from it to our new DFS array. There are two main folders containing about 1.7 million and about half a million files (letters, memos, and other small documents) respectively; total size is ~420 GB and ~100 GB. We're using the GUI version of robocopy on the server to copy the files. We initiated a file copy about a month ago to test the process and found that it took around 4 hours for the large folder. Now that I'm actually switching the files over, it has been taking 18-20 hours. Nothing has changed on the server side, and nothing has changed in the copy settings (no logs, 1 retry with a wait of 1 second). Our intent is to shut off the share and force the copy over again to pick up all the files that were skipped because users had them locked, but I can't take a 20-hour outage to do that. Does anyone have any theories about what could be causing such a delay for robocopy compared to the previously shorter runs?
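
    Since robocopy skips files that already match at the destination, a list-only pass from a command prompt is a cheap way to see whether the time is going into copying data or into per-file comparison overhead across 2+ million entries. A sketch (paths hypothetical):

        :: /L = list only, no data moved; suppressing console output
        :: (/NFL /NDL /NP) often speeds long runs dramatically on its own
        robocopy \\oldvm\share \\dfs\target /E /L /R:1 /W:1 /NFL /NDL /NP /LOG:estimate.log

    Note that /MT multithreading would help with millions of small files, but it only exists in the Windows 7 / Server 2008 R2-era robocopy, not the version available on Windows 2003.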

  • Why am I not able to create a backup plan for TFS?

    - by noocyte
    I am trying to create a backup plan using the TFS Power Tools, but I keep running into an error. I have checked that the account has Full Control on the share; I can edit, create and delete files there. From the log:

        [Info @07:15:00.403] Starting creating backup test validation
        [Error @07:15:00.700] Microsoft.SqlServer.Management.Smo.FailedOperationException: Backup failed for Server 'WMSI003714N\SqlExpress'. ---> Microsoft.SqlServer.Management.Common.ExecutionFailureException: An exception occurred while executing a Transact-SQL statement or batch. ---> System.Data.SqlClient.SqlException: Cannot open backup device '\\wmsi003714n\sql dump\Tfs_Configuration_20100910091500.bak'. Operating system error 5(failed to retrieve text for this error. Reason: 1815). BACKUP DATABASE is terminating abnormally.
           at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
           --- End of inner exception stack trace ---
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(StringCollection sqlCommands, ExecutionTypes executionType)
           at Microsoft.SqlServer.Management.Smo.ExecutionManager.ExecuteNonQuery(StringCollection queries)
           at Microsoft.SqlServer.Management.Smo.BackupRestoreBase.ExecuteSql(Server server, StringCollection queries)
           at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
           --- End of inner exception stack trace ---
           at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
           at Microsoft.TeamFoundation.PowerTools.Admin.Helpers.BackupFactory.TestBackupCreation(String path)
        [Error @07:15:00.731] !Verify Error!: Account GROUPINFRA\SA-NO-TeamService failed to create backups using path \\wmsi003714n\sql dump
        [Info @07:15:00.731] "Verify: Grant Backup Plan Permissions\Root\VerifyDummyBackupCreation(VerifyTestBackupCreatedSuccessfully): Exiting Verification with state Completed and result Error"

    Any ideas?
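
    Operating system error 5 is "access denied", and the BACKUP DATABASE statement runs as the SQL Server service account, not as the user whose permissions were checked, so it is that account (here apparently GROUPINFRA\SA-NO-TeamService) that needs write access on both the share and the underlying NTFS folder. A sketch; the folder's local path on the share host is an assumption:

        :: on the machine hosting the share, grant modify rights to the service account
        icacls "D:\sql dump" /grant "GROUPINFRA\SA-NO-TeamService:(OI)(CI)M"

    Share-level permissions must allow Change for the same account as well.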

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however, if for whatever reason I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors in syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then 7 times out of 10 it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
          -T 127.0.0.1:6082 \
          -s malloc,1GB \
          -f /home/deploy/mysite.vcl \
          -u deploy \
          -g deploy \
          -p obj_workspace=4096 \
          -p sess_workspace=262144 \
          -p listen_depth=2048 \
          -p overflow_max=2000 \
          -p ping_interval=2 \
          -p log_hashstring=off \
          -h classic,5000009 \
          -p thread_pool_max=1000 \
          -p lru_interval=60 \
          -p esi_syntax=0x00000003 \
          -p sess_timeout=10 \
          -p thread_pools=1 \
          -p thread_pool_min=100 \
          -p shm_workspace=32768 \
          -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such an inconsistent behaviour.
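
    Two cheap ways to make the failure speak up, assuming your varnishd build supports foreground mode (-F; on 2.0-era builds, giving -d twice similarly starts the child immediately without the CLI prompt): keep the daemon attached to the terminal so a crash prints somewhere visible, and watch the kernel log, since a 1 GB malloc cache plus worker overhead on a small VM is a classic OOM-kill candidate. A sketch:

        # foreground run; any assert/panic prints instead of vanishing
        varnishd -F -a :80 -T 127.0.0.1:6082 -s malloc,1GB -f /home/deploy/mysite.vcl

        # in a second terminal, watch for OOM kills or varnish messages
        tail -f /var/log/syslog | grep -Ei 'varnish|oom|killed'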

  • A brand-new FS based on a database, without using FUSE

    - by Devrim
    Hi all. To serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that is backed by MongoDB (or any other database). Basically, it would work like FUSE: every single file is kept in Mongo GridFS. In theory, I would run mount mongodbfs /mountPoint mongodb://localhost, and then when I say touch /mountPoint/test.txt, the file is inserted into MongoDB. This FS would also store uid/gid and permissions with the file; we could throw hundreds of servers at it, and no useradd would be necessary. I'm not thinking of including all the features of a FS, just the ones we need. My question is: how do I start my quest for the resources, books, links, people, and developers who could help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.
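
    Before touching kernel or VFS code, the storage layer itself can be prototyped against GridFS with the stock tooling that ships with MongoDB; a sketch (the database name is hypothetical):

        # round-trip a file through GridFS
        mongofiles -d protofs put test.txt
        mongofiles -d protofs list
        mongofiles -d protofs get test.txt

    uid/gid and permission bits would live alongside each file as GridFS metadata; the hard part, and the part worth researching first, is mapping POSIX directory semantics (rename, atomic create, locking) onto a database without FUSE in the middle.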

  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using the Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on an Ubuntu Linux server on the local network. Other development machines running OS X and Linux connect fine; there are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database-heavy) run mostly OK, with only occasional timeout messages. In this particular project, I constantly get "Maximum execution time of 30 seconds exceeded" when functions such as mysql_query() are run. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ]. My local set-up is Windows 7 Professional 64-bit with XAMPP 1.7.7 (PHP 5.3.8). The output of uname -a on the Linux server is:

        Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

    I've tried the following, with no success:

        - Disabling the Windows firewall
        - Switching between a persistent and a normal connection
        - Adding skip-name-resolve to my.cnf
        - Increasing wait_timeout
        - Enabling bind-address

    I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?
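
    To separate connection setup from query execution, it helps to time a bare handshake from the Windows box with the MySQL command-line client; if that alone takes seconds, the issue is name resolution or the network path, not Kohana or the queries. A sketch (host and credentials are hypothetical):

        # a trivial round-trip; repeat a few times and watch the wall clock
        mysql -h 192.168.1.20 -u devuser -p -e "SELECT 1"

    If that is instant while the PHP connection stalls, compare how the project connects (hostname vs. IP, persistent vs. not); if it is slow too, run host <windows-client-ip> on the Ubuntu server to confirm skip-name-resolve really took effect.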

  • No sound in Ubuntu 10.10 (SB Live! card)

    - by Chris Frazier
    I am new to Linux/Ubuntu and I just installed 10.10 on my Dell desktop PC that was running Windows XP. The PC is about 5-6 years old and its sound card is a SoundBlaster Live! card. Sound Preferences recognizes the card as [SB Live! Value] EMU10k1X, currently set to Analog Stereo Duplex. I tried multiple other sound configurations and nothing works: no sound, just some clicks from the speakers. I ran the system test, and during the audio tests the same thing happened, just clicks and pops. I tried to play some music with Rhythmbox; when it tries to play a track, it tries for several seconds and then closes itself, or sometimes just doesn't play. I ran a check for drivers, but the only driver it found to install was for my nVidia video card; it did not show any drivers for the sound card. Does anyone have any idea what I need to do to get the sound to work?
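
    Ubuntu drives this card with the in-kernel snd-emu10k1x ALSA driver, so there is nothing extra to download; the usual culprits are a muted mixer channel or PulseAudio targeting the wrong device. Some checks from a terminal, as a starting point:

        aplay -l                    # is the EMU10k1X listed as a playback device?
        amixer -c 0 sget Master     # look for [off]; unmute with: amixer -c 0 sset Master unmute
        speaker-test -c 2 -t wav    # raw ALSA output, bypassing PulseAudio

    If speaker-test produces clean audio, the card and driver are fine and the problem sits in PulseAudio's device selection.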

  • Should I keep my ex-employer's data?

    - by Jurily
    Following my brief reign as System Monkey, I am now faced with a dilemma: I did successfully create a backup and a test VM, both on my laptop, as no computer at work had enough free disk space. I haven't deleted the backup yet, as it's still the only one of its kind in the company's history. The original is running on a hard drive in continuous use since 2006. There is now only one person left at the company who knows what a backup is, and they're unlikely to hire someone else, for reasons very closely related to my departure. Last time I tried to talk to them about the importance of backups, they thought I was threatening them. Should I keep it?

    Pros:
        - I get to save people from their own stupidity (the unofficial sysadmin motto, as far as I know)
        - I get to say "I told you so" when they come begging for help, and feel good about it
        - I get to say nice things about myself at my next job interview
        - Nice clean conscience
        - Bonus rep with the appropriate deities

    Cons:
        - Legal problems: even if I do help them out with it, they might just sue me for keeping it anyway, although given the circumstances I think I have a good case
        - Legal problems: given the nature of the job and their security, if something leaks, I'm a likely target for retaliation
        - Legal problems: whatever else I didn't think about
        - I need more space for porn.
        - Legal problems.

    What would you do?

  • How to access XAMPP virtual hosts from iPad on local network?

    - by martin's
    I'm using XAMPP on one machine, with multiple virtual hosts defined, one per project. The format is <project>.local, for example:

        apple.local
        microsoft.local
        client-site.local
        our-own-internal-site.local

    All works perfectly from that one machine. I now want other systems within the network to access the various sites. The main reason is to be able to test site functionality and layout from mobile devices without having to upload partial work to public servers. I can access the main XAMPP default site by simply entering the IP address of the XAMPP machine in, say, Safari on an iPad. However, there is no way I can see to reach <project>.local. Would this entail setting up a DNS server within the network? We have a mixture of Windows and Mac machines, no Linux. The XAMPP machine is Vista 64. I don't want real external internet access to be affected in any way, just "<project>.local" pointed at the XAMPP machine, if that makes sense.
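
    Since the iPad has no editable hosts file, a small DNS server on the LAN is the usual answer, and dnsmasq makes the wildcard a one-liner. A sketch (the IP is hypothetical), with one caveat: Apple devices reserve .local for Bonjour/mDNS, so a suffix like .test avoids flaky resolution on iOS:

        # /etc/dnsmasq.conf on any always-on LAN machine
        # send every *.test name to the XAMPP box:
        address=/.test/192.168.1.50

    Then point the iPad's Wi-Fi DNS setting at that machine, and mirror the vhost ServerNames to the new suffix.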

  • What are the IR codes the new Apple Remote (alu) uses?

    - by index
    I would like to clone the new Apple Remote (infrared, second generation, aluminium), just for fun, with a microcontroller. Most codes of the previous model can be found in the LIRC remote-control database (all except the key combinations menu + << and menu + play, which unpair, change the ID, and pair the remote). I also don't know which bit encodes the battery status. It uses a modified 32-bit NEC protocol (reverse the LIRC codes bytewise). But the new Apple Remote uses two additional codes for the play and the new select button. I don't have a Mac, so I can't brute-force test codes either ;-) So if someone possesses such a remote and the ability to record those two new buttons and the three combinations, I'd really appreciate it. If you can't run LIRC (or it gets confused by the new codes) and you don't have an oscilloscope or logic analyser, maybe you could hook up a photodiode to your sound input and record the codes with Audacity? Just hit record, hit each button and combo a few times, hit stop, upload the uncompressed WAV file to a sharing site, done. That'd be great!

  • What user information is exposed via a browser?

    - by ipso
    Is there a function or website that can collect and display ALL of the user information that can be obtained via a browser? Background: this of course does not account for the significant cross-referencing abilities of large corporations, which collate multiple sources and signals from users across various properties, but it's a first step. Ghostery is a great idea: it shows people all of the surreptitious scripts that run on any given website. But what information is available, what is the total set of values stored, that those scripts can collect from? If you log in to a search engine and stay logged in but leave its tab, is that company still collecting your page views and activity from other tabs? Can past or future inputs to pages be captured, say comments on another website? What types of activities are stored as variables in the browser that can be collected? This is surely a highly complex question, given the countless user scenarios, but my whole point is to cut through all that and just show the total set of data available at any given point in time. Then you can A/B test and see what is available in a fresh session with one tab open vs. the same webpage with 12 tabs open and a full day of history to boot. (Latest Firefox & Chrome, on Win7, Win8 or Mint13, although I'd like to think that won't make too much of a difference. Make assumptions. Simple is better.)

  • Organizing my music and my iTunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes library, at least 5k with ratings and play counts, and apparently just 12k actual music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... in short, a big mess with no tags! There's probably no single piece of software capable of organizing everything, though I'd love one. Hopefully some time in the near future we will all be able to just sync the cloud of our automagically selected music to a newly created offline copy. But meanwhile... Please consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least the iTunes ratings, because of my iPhone and smart playlists. I'm not looking for an iTunes replacement; I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer distributes POST requests among m1.large instances. Every instance runs nginx on port 80, which distributes the requests to 4 python-tornado servers on the backend. These tornado servers take about 5-10 ms to respond to one request, but that is the internal compute time of each request. I want to put this under test: I want to measure the response time from the ELB to the upstream and back, see how it varies as QPS throughput increases, and plot a graph of time vs. QPS vs. latency along with other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log? I would also need to write a self-monitor that keeps checking the end-to-end response time. Is it possible to do that with a script from within the server? If so, will it be accurate?
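
    A load generator that holds a fixed request rate and reports latency percentiles maps directly onto the time-vs-QPS plot; wrk2 (the -R fork of wrk) does exactly this, and running it from a separate instance in the same region keeps client capacity out of the measurement. A sketch (the endpoint and Lua script are hypothetical):

        # sweep target rates; wrk2 reports latency percentiles per run
        for rate in 100 250 500 1000 2000; do
            wrk -t4 -c64 -d60s -R "$rate" --latency \
                -s post.lua https://my-elb.example.com/endpoint
        done

    post.lua would set wrk.method = "POST" and a request body; correlating each run's timestamp with CloudWatch (ELB latency, instance CPU) fills in the rest of the graph.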

  • Unicorn and nginx: something went wrong

    - by achempion
    I'm trying to deploy my app via Capistrano. The deploy completes, but when I start nginx and open my site in the browser, I see "We're sorry, but something went wrong." I use unicorn; see my configs at https://gist.github.com/3904032. If I start the server via rails s -e production, it works! I think the error may be occurring because I can't restart the server:

        root@li272-194:~# /etc/init.d/nginx restart
        Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        configuration file /etc/nginx/nginx.conf test is successful
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: still could not bind()

    Any ideas? The nginx log shows:

        2012/10/17 02:57:41 [error] 3271#0: *1 could not find named location "@myapp", client: 91.192.62.77, server: 178.79.153.194, request: "GET / HTTP/1.1", host: "178.79.153.194"
        2012/10/17 02:19:08 [crit] 2448#0: *8 connect() to unix:/srv/zarcon/shared/unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 91.192.62.77, server: zarkon, request: "GET / HTTP/1.1", upstream: "http://unix:/srv/zarcon/shared/unicorn.sock:/", host: "178.79.153.194"
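
    "Address already in use" means some process, most likely an earlier nginx master the init script lost track of, still owns port 80, so the restart can never bind. A sketch:

        # who owns port 80?
        fuser -v 80/tcp          # or: netstat -tlnp | grep ':80 '
        # stop it (PID taken from the output above), then start cleanly
        kill <pid>
        /etc/init.d/nginx start

    The log lines point at two separate follow-ups: the running nginx references a named location @myapp that its loaded config never defines, and unicorn never created /srv/zarcon/shared/unicorn.sock, so unicorn's own log is worth checking next.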

  • 403 Forbidden serving static files from VirtualBox shared folder with nginx (Ubuntu 10.04LTS guest, Windows 7 host)

    - by Chris Pratt
    I'm working in a local development VM, trying to test serving my site with gunicorn, and nginx as a reverse proxy for static resources only. With user nginx; in nginx.conf, the site loads minus its static resources, and attempting to load a static resource individually reveals a 403 Forbidden error. For background: the static resources are in a shared folder under /media/sf_work. All files are owned by root:vboxsf (the VirtualBox default). My user account on the system has been added to the vboxsf group, and I have full access to the shared folder. For comparison, I tried changing the nginx.conf user to my own account; in that scenario the static files did load, but the homepage itself gave a 403 Forbidden error. So I then tried adding the nginx user to the vboxsf group, but then everything gave a 403 Forbidden error. After further investigation, it seems that if the nginx.conf user is in any group, the result is a 403 Forbidden. Any idea what could possibly be going on here?
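
    One property of vboxsf worth knowing: ownership and mode bits are fixed at mount time, and chmod/chown inside the guest have no effect, so the group route only works if the mount's dmode/fmode actually grant the group access (and nginx must be restarted after the group change, since membership is read when the workers spawn). A sketch that remounts the share readable by the nginx user (the share name and UID mapping are assumptions):

        umount /media/sf_work
        mount -t vboxsf -o uid=nginx,gid=nginx,dmode=0755,fmode=0644 work /media/sf_work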

  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as: "Information on the machine on which a web server is located is sometimes included in the header of a web page. Under certain circumstances that information may include local information from behind a firewall or proxy server, such as the local IP address." It looks like nginx is responding with:

        Service: https
        Received: HTTP/1.1 302 Found
        Cache-Control: no-cache
        Content-Type: text/html; charset=utf-8
        Location: http://ip-10-194-73-254/
        Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
        X-Runtime: 0
        Content-Length: 90
        Connection: Close

        <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html>

    I'm no expert, so please correct me if I'm wrong, but from what I gathered, the problem is that the Location header returns http://ip-10-194-73-254/, a private internal name, when it should return our domain name (which is ravn.com). So I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer, not a server admin, so I have no idea what to do... Any help would be greatly appreciated! Also, might I add that we're running more than one server, so the configuration would need to be transferable to any server with any private address.
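
    The 302 is generated by the Rails app from the Host header it received, so when the scanner requests the server by IP, the redirect echoes the machine's internal EC2 name back. One transferable mitigation is to catch non-canonical Hosts in nginx before they reach the app, so every server redirects to the real domain regardless of its private address; a sketch:

        # default vhost: anything not addressed to the canonical name
        # gets bounced to it before Passenger ever sees the request
        server {
            listen 80 default_server;
            server_name _;
            return 301 https://ravn.com$request_uri;
        }

    A matching default server on port 443 (with the certificate) covers the HTTPS scan the report shows.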

  • How do I configure the open_basedir parameter on my CentOS VPS?

    - by deltanovember
    The parameter can be seen here: http://wordswithfriends.net/test.php

        open_basedir  /var/www/vhosts/wor.wordswithfriends.net/wordswithfriends.net/:/tmp

    I'm trying to add the PHP PEAR directories. /var/www/vhosts/wor.wordswithfriends.net/conf is as follows:

        -rw-r----- 1 root apache 6461 Jan 25 08:56 12959674170.16899500_httpd.include
        -rw-r----- 1 root apache 6461 Jan 31 06:52 12960111810.31860800_httpd.include
        -rw-r----- 1 root apache 6532 Jan 31 06:55 12964785250.54523600_httpd.include
        -rw-r----- 1 root apache 6532 Jan 31 07:01 12964788880.47252600_httpd.include
        -rw-r----- 1 root apache 6532 Jan 31 15:54 12965108850.92819600_httpd.include
        -rw-r----- 1 root apache 6652 Jan 31 21:32 12965206700.32285200_httpd.include

    The newest include is currently configured as follows:

        grep base 12965206700.32285200_httpd.include
        php_admin_value open_basedir /var/www/vhosts/wor.wordswithfriends.net/httpdocs/:/tmp/:/usr/share/pear/:/local/PEAR/
        php_admin_value open_basedir /var/www/vhosts/wor.wordswithfriends.net/httpdocs/:/tmp/:/usr/share/pear/:/local/PEAR/
        php_admin_value open_basedir /var/www/vhosts/wor.wordswithfriends.net/httpdocs/:/tmp/:/usr/share/pear/:/local/PEAR/
        php_admin_value open_basedir /var/www/vhosts/wor.wordswithfriends.net/httpdocs/:/tmp/:/usr/share/pear/:/local/PEAR/

    I configured vhost.conf as follows:

        <Directory /var/www/vhosts/wor.wordswithfriends.net/wordswithfriends.net>
            <IfModule sapi_apache2.c>
                php_admin_flag engine on
                php_admin_flag safe_mode off
                php_admin_value open_basedir "/var/www/vhosts/wor.wordswithfriends.net:/tmp:/usr/share/pear/local/PEAR"
            </IfModule>
            <IfModule mod_php5.c>
                php_admin_flag engine on
                php_admin_flag safe_mode off
                php_admin_value open_basedir "/var/www/vhosts/wor.wordswithfriends.net:/tmp:/usr/share/pear:/local/PEAR"
            </IfModule>
        </Directory>

    I restarted Apache and the parameter is still the same. I'm not sure why my PEAR directories are not showing up. I'm using Plesk. Any help appreciated.
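
    Two things worth verifying before editing more files: which generated include Apache actually loads (the live value above matches neither the include nor vhost.conf, suggesting a third config is winning), and whether Plesk has merged vhost.conf at all, since Plesk only picks up vhost.conf when the vhost's config is regenerated, not on a bare Apache restart. A sketch; the reconfigure command is an assumption, as its exact path and flags vary between Plesk versions:

        apachectl -S        # dump the parsed vhost layout Apache is really using
        grep -n open_basedir /var/www/vhosts/wor.wordswithfriends.net/conf/vhost.conf
        # ask Plesk to regenerate this vhost's config, then restart Apache
        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=wordswithfriends.net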

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine: all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this happens in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but that only works if the file has already been loaded. Can I list the loaded format data files somewhere? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if(!(Test-Path $target)) {
                    new-Item -ItemType Directory -Path $target | Out-Null
                }
                get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed. You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    Since I updated my $profile to always load this module, Update-FormatData had already been called when I ran the install script. In the install script I force-import the module, which fires Update-FormatData a second time, and that second call fails.
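
    Given that the only failure mode here is the duplicate-load error on re-import, one pragmatic guard is to treat that specific failure as benign; a minimal sketch, assuming no other load errors matter for this module:

        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot "*.ps1xml") -ErrorAction Stop
        }
        catch {
            # the data is already loaded from the previous import; safe to continue
            Write-Verbose "Update-FormatData skipped: $_"
        }

    Get-FormatData can also enumerate the formatting data currently loaded in the session, which allows checking before the call instead of catching after it.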
