Search Results

Search found 34556 results on 1383 pages for 'setting as default'.


  • VLAN Tagging Traffic on Cisco Switch

    - by David W
    I have a situation where I'm setting up multiple VLANs on a pfSense firewall on the same physical interface for a client. So in pfSense, I now have VLAN 100 (employees) and VLAN 200 (students - student computer lab). Downstream from pfSense, I have a Cisco SG200 switch, and coming off of the SG200 is the student lab (running on a Catalyst 2950. Yes, that's old, but it works, and this is a poor nonprofit we're talking about). What I'd like to do is tag everything on the network as VLAN 100, except for the student computer lab.

    Earlier today when I was on-site with the client, I went into the old Catalyst 2950 and assigned all of its ports to access VLAN 200 (switchport access vlan 200) without setting up a trunk on the Catalyst or on the SG200. Looking back on it, I now understand why internet in the lab broke. I reverted the lab back to the default VLAN 1 (we're still running on a different firewall - we haven't deployed pfSense yet - and the traffic is still separated physically).

    So my question is: what do I need to do in order to properly deploy this scenario? I believe the correct answer is:
    1. Ensure VLANs 100 and 200 are set up in pfSense, and that DHCP is operating correctly (on separate subnets).
    2. Set up a trunk port that allows both VLAN 100 and VLAN 200 traffic, and plug that port directly into pfSense.
    3. Set up a VLAN 200 trunk port on the SG200 (it's not running IOS, but if it were, the command would be switchport trunk native vlan 200), which will then plug into the Catalyst 2950.
    4. Set up a VLAN 200 trunk port on the Catalyst 2950 (the one plugged into the SG200's VLAN 200 port, with the same command - switchport trunk native vlan 200).
    5. Set up the rest of the ports on the old Catalyst 2950 in the lab to be access ports on VLAN 200.
    Is there anything that I'm missing, or do I need to tweak any of these steps, in order to properly segment the network traffic?
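    For concreteness, here is roughly what I think steps 4 and 5 look like on the Catalyst's IOS side (a sketch only - the interface numbers are placeholders, and I'm assuming the uplink carries VLAN 200 untagged as the native VLAN, per step 3):

        ! Catalyst 2950 - uplink toward the SG200
        vlan 200
         name students
        interface FastEthernet0/1
         switchport mode trunk
         switchport trunk native vlan 200
        ! lab machines as plain access ports in VLAN 200
        interface range FastEthernet0/2 - 24
         switchport mode access
         switchport access vlan 200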


  • Adding a 2nd DC to the domain from a different subnet over VPN

    - by EagerToLearn
    I'm in the process of adding a second DC to our domain and just want to make sure I have all the steps right before proceeding. Info:
    - DC1 is 2008 R2 Standard.
    - DC2 is 2008 R2 Standard.
    - Network1 is 192.168.39.x/24.
    - Network2 is 10.0.0.x/24.
    - VPN is SonicWall.
    The two DCs will be at two different sites, but the networks are connected by hardware VPN (SonicWall). The main DC will be on the 192.168.39.0/24 network. The 2nd DC will be on 10.0.0.0/24. Here are the steps I plan to take; please let me know if I'm missing anything.

    Part 1: In AD Sites and Services on DC1, create a new site and subnet for DC2. (Or should I create a new one for both?) (Can I use the default IPSiteLink and not change anything in there other than the refresh timer?)

    Part 2: Point the DNS of DC2 to DC1. Run /forestprep and /domainprep (on both, or just DC1?). Run dcpromo and select "Additional Domain Controller for Existing Domain". Then continue with the normal steps, with default locations for the databases.

    EDIT: When DCPromo-ing DC2, do I need to have "Append primary and connection specific DNS suffixes" and "Append parent suffixes of the primary DNS suffix" checked?
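    For reference, this is what I understand the prep step in Part 2 to be (a sketch, assuming the 2008 R2 installation media; if the forest was originally built with 2008 R2 media, my understanding is adprep will simply report there is nothing to do):

        :: on the schema master (usually DC1), from the 2008 R2 DVD:
        D:\support\adprep\adprep.exe /forestprep
        D:\support\adprep\adprep.exe /domainprep /gpprep
        :: then on DC2:
        dcpromo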


  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error svn: access to '/repos/!svn/vcc/default' forbidden? I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running on Apache with mod_dav_svn. Running:

        svn ls http://myserver/repos/myproject/trunk

    lists the correct files. But when I go to commit, I get the error:

        svn: access to '/repos/!svn/vcc/default' forbidden

    My Apache virtual host for svn is:

        <VirtualHost *:80>
            ServerName svn.mydomain.com
            ServerAlias svn
            DocumentRoot "/var/www/html"
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory "/var/www/html">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <Location /repos>
                Order allow,deny
                Allow from all
                DAV svn
                SVNPath /var/svn/repos
                SVNAutoversioning On
                # Authenticate with Kerberos
                AuthType Kerberos
                AuthName "Subversion Repository"
                KrbAuthRealms mydomain.com
                Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab
                # Get people from LDAP
                AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid
                # For any operations other than these, require an authenticated user.
                <LimitExcept GET PROPFIND OPTIONS REPORT>
                    Require valid-user
                </LimitExcept>
            </Location>
        </VirtualHost>

    What's causing this error?

    EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these:

        [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"]

    I'm not entirely sure how to read this, but I'm interpreting "Method is not allowed by policy" as meaning that there's some security Apache module that might be blocking access. How do I change this?
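    From what I can tell, the ModSecurity core rules only permit a fixed set of HTTP methods, and an SVN commit uses WebDAV methods like MKACTIVITY and CHECKOUT. Is the right fix something like the following (a sketch - I have not verified the CRS variable name or which setup file it lives in on my version)?

        # disable ModSecurity just for the repository path
        <LocationMatch "^/repos">
            SecRuleEngine Off
        </LocationMatch>

        # or, keep the engine on and extend the allowed-methods list in the
        # CRS setup file, e.g. something like:
        #   SecAction "phase:1,t:none,nolog,pass, \
        #     setvar:'tx.allowed_methods=GET HEAD POST OPTIONS PROPFIND REPORT \
        #       MERGE MKACTIVITY CHECKOUT PUT DELETE COPY MOVE LOCK UNLOCK'"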


  • Simple way to set up port knocking on Linux?

    - by Ace Paus
    Port knocking, combined with firewall rule changes, is best used to provide an additional layer of security in front of tools such as the OpenSSH server. I would like some help setting it up on an Ubuntu server. I looked at some of the port knocking implementations listed here: http://www.portknocking.org/view/implementations

    fwknop looked good. I found an Android client for it, and fwknop (both client and server) is in the Ubuntu repos. Unfortunately, setting it up (on the server) looks difficult. I do not have iptables set up, and my proficiency with iptables is limited (but I understand the basics).

    I'm looking for a series of simple steps to set it up. I only want to open the SSH port in response to a valid knock. Alternatively, I would consider other port knocking implementations, if they are much simpler to set up and the desired Linux and Android clients are available.
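    From the documentation I've skimmed, a minimal fwknop setup looks something like this (a sketch only - the stanza syntax shown is the legacy perl fwknopd style, and the newer C rewrite differs, so the man page is authoritative; package names are what I saw in the repos):

        # install server and client
        sudo apt-get install fwknop-server fwknop-client

        # /etc/fwknop/access.conf - who may knock, and what a valid knock opens
        SOURCE: ANY;
        OPEN_PORTS: tcp/22;
        KEY: use-a-long-random-passphrase-here;
        FW_ACCESS_TIMEOUT: 30;

        # baseline firewall: drop SSH unless fwknopd has inserted an accept rule
        sudo iptables -A INPUT -p tcp --dport 22 -j DROP

        # from the client: knock, then connect within the timeout
        fwknop -A tcp/22 -a <your_public_ip> -D server.example.com
        ssh user@server.example.com

    Is that the right shape, and is there a safe way to bootstrap the iptables side when nothing is configured yet?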


  • Trouble running multiple domains on Tomcat behind Apache via mod_jk

    - by mkoryak
    I am having trouble setting up Tomcat 6 with 2 virtual hosts, behind Apache 2. If I have just one host defined in Tomcat and one jk worker, everything works fine. As soon as I define another jk worker and a corresponding Tomcat host, I get this error in jk.log:

        9:3075328656] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (69.164.218.75:8009) (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_send_request::jk_ajp_common.c (1507): (dogself) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] ajp_service::jk_ajp_common.c (2447): (dogself) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_service::jk_ajp_common.c (2466): (dogself) connecting to tomcat failed.
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=dogself

    My Tomcat server.xml looks like this:

        <Service name="Catalina">
          <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
          <Engine name="Catalina" defaultHost="dogself.com">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="dogself.com" appBase="webapps-dogself"
                  unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
            <Host name="nousophia.com" appBase="webapps-test"
                  unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>

    My workers.properties looks like this:

        # workers.properties - ajp13
        # List workers
        worker.list=dogself,nousophia
        # Define dogself
        worker.dogself.port=8009
        worker.dogself.host=dogself.com
        worker.dogself.type=ajp13
        worker.nousophia.port=8009
        worker.nousophia.host=nousophia.com
        worker.nousophia.type=ajp13

    Tomcat is started/restarted. I followed these directions for setting it up: http://stackoverflow.com/questions/1765399/linking-apache-to-tomcat-with-multiple-domains

    Can someone confirm that it would work as above?
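    One thing I notice from the log: both worker host names appear to resolve to the public IP (69.164.218.75), so mod_jk is connecting over the external interface instead of loopback, and as I understand it Tomcat selects the <Host> from the forwarded Host header anyway. Would a single loopback worker be the cleaner setup (a sketch of what I mean; not confirmed)?

        # workers.properties - one worker; Tomcat picks the <Host>
        # from the request's Host header
        worker.list=ajp13
        worker.ajp13.type=ajp13
        worker.ajp13.host=localhost
        worker.ajp13.port=8009

        # then in each Apache VirtualHost:
        #   ServerName dogself.com        (or nousophia.com)
        #   JkMount /* ajp13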


  • 'which' isn't working on Linode servers (Ubuntu 10.04)?

    - by chrisjlee
    Currently trying to configure a Linode server running Ubuntu 10.04. I utilized a StackScript (default Drupal profile) which seemed to run successfully; the log indicates so as well. Then I ssh'd into the server (as root) to try to configure PHP. When I run which php or which php5, they both return nothing. A which python returns something, though. I know where the default path to php should be, but I usually just use which as confirmation that php exists. Do I have to modify some configuration to enable which to work? Also, tab completion doesn't seem to work when I apt-get install.

    Update: Thanks for the suggestions guys. I've run a couple of commands and no luck either:

        [ root@ ~ ] $ dpkg -l | grep php
        [ root@ ~ ] $ apt-get install php5-cli
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package php5-cli is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package php5-cli has no installation candidate

    Then I tried installing php and php-cli together:

        [ root@ ~ ] $ sudo apt-get install php5 php5-cli
        sudo: unable to resolve host xxxxxxx
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package php5 is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package php5 has no installation candidate
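    Reading my own output back, the empty dpkg -l suggests PHP was never installed at all (so which is behaving correctly), and "no installation candidate" usually points at a stale or incomplete package index. This is the diagnostic I'm planning to run next (a sketch; the mirror URL is just what a stock Lucid sources.list tends to look like):

        # is anything php-ish installed at all?
        dpkg -l 'php*'
        # are the standard lucid components present?
        grep -v '^#' /etc/apt/sources.list
        # expect lines like:
        #   deb http://us.archive.ubuntu.com/ubuntu/ lucid main restricted universe
        # refresh the index, then retry
        apt-get update
        apt-get install php5 php5-cli
        # the "unable to resolve host" sudo warning looks separate: the name in
        # /etc/hostname should also appear on the 127.0.1.1 line of /etc/hosts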


  • Is there a way to prevent password expiration when a user has no password?

    - by Eric DANNIELOU
    Okay, we all care about security, so users should change their passwords on a regular basis (who said passwords are like underwear?). On Red Hat and CentOS (5.x and 6.x), it's possible to make every real user's password expire after 45 days, and warn them 7 days before. An /etc/shadow entry then looks like:

        testuser:$6$m8VQ7BWU$b3UBovxC5b9p2UxLxyT0QKKgG1RoOHoap2CV7HviDJ03AUvcFTqB.yiV4Dn7Rj6LgCBsJ1.obQpaLVCx5.Sx90:15588:1:45:7:::

    It works very well and most users change their passwords often. Some users find it convenient not to use any password but an SSH public key instead (and I'd like to encourage them). Then after 45 days they can't log in, as they have forgotten their password and are asked to change it. Is there a way to prevent password expiration if and only if the password is disabled? Setting

        testuser:!!:15588:1:45:7:::

    in /etc/shadow did not work: testuser is asked to change his password after 45 days. Of course, setting password expiration back to 99999 days works, but:
    - It requires extra work.
    - Security auditors might not be happy.
    Is there a system-wide parameter that would prompt the user to change an expired password only if he really has one?
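    The best workaround I've come up with so far is a loop that relaxes the maximum age only for locked-password accounts (a sketch; it assumes disabled passwords show up as a field starting with "!", which also matches accounts locked with usermod -L):

        # for every account whose password field is locked, drop the
        # 45-day maximum while leaving real passwords alone
        awk -F: '$2 ~ /^!/ { print $1 }' /etc/shadow | while read u; do
            chage -M -1 "$u"    # -1 removes the maximum-age check
        done
        # verify:
        chage -l testuser

    But that is exactly the kind of per-account extra work I was hoping a system-wide setting would avoid.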


  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirreL Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at the same time. The problem is that, at least by default, SQuirreL Client does not open multiple tables at a time. When you click on a different table object, the main view is refreshed with the newly selected table's data. Oracle's SQL Developer (which I am trying to move away from) did this by default if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirreL Client? I do not require the ability to view the contents of two different tables on the same screen (i.e. a split view), but I would like the ability to have tabs for each table so I can quickly switch between each table view rather than having to find the table in the table list over and over again. Note: I am using this in a professional capacity, but if it does not "belong" here then I suppose it could be moved to Super User.


  • Firefox: Unload a tab manually

    - by unor
    Firefox has a setting "Don't load tabs until selected" (see How do I make Firefox 13 Load All My Tabs on Startup or when Resuming Reload). I like that behaviour. I am searching for a way to "unload"/deactivate a tab manually for a session (until I reload it). It should stop all running JavaScript functions and plugins (like Flash). The whole webpage content may disappear until I reload/re-activate the tab, but that is not a requirement. The title has to be displayed as the tab label (as is the case with the startup setting, too). The workaround would be to restart Firefox and not switch to the tab I want deactivated. This is pretty annoying, of course.

    EDIT: Here is what I found so far (thanks, @bytebuster!):
    - BarTab: no longer being maintained (see why)
    - BarTab Lite: seems to miss this functionality from BarTab
    - Dormancy: experimental; comes with a warning that it "may eat your session"
    - Tab Mix Plus: feature request "Unload Tab feature"
    - Tab Utilities: seems to offer this functionality only for automatic unloading; feature request "Add 'unload tab' to tab context menu"
    - UnloadTab: removed from addons.mozilla.org (who knows why)


  • CentOS 5: error installing php-imap

    - by TMMDev
    Can someone please help me with this CentOS 5 question? I am trying to install php-imap. I tried yum install php-imap but I am getting the following output:

        Loaded plugins: fastestmirror, priorities, security
        Loading mirror speeds from cached hostfile
         * base: centos.hostingxtreme.com
         * epel: mirror.steadfast.net
         * extras: mirror.team-cymru.org
         * updates: mirror.beyondhosting.net
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-imap.x86_64 0:5.1.6-44.el5_10 set to be updated
        --> Processing Dependency: php-common = 5.1.6-44.el5_10 for package: php-imap
        --> Finished Dependency Resolution
        php-imap-5.1.6-44.el5_10.x86_64 from updates has depsolving problems
          --> Missing Dependency: php-common = 5.1.6-44.el5_10 is needed by package php-imap-5.1.6-44.el5_10.x86_64 (updates)
        Error: Missing Dependency: php-common = 5.1.6-44.el5_10 is needed by package php-imap-5.1.6-44.el5_10.x86_64 (updates)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.

    I already have php-common installed. I ran yum install php-common and got the following output:

        Loaded plugins: fastestmirror, priorities, security
        Loading mirror speeds from cached hostfile
         * base: centos.hostingxtreme.com
         * epel: mirror.steadfast.net
         * extras: mirror.team-cymru.org
         * updates: mirror.beyondhosting.net
        Setting up Install Process
        Package matching php-common-5.1.6-44.el5_10.x86_64 already installed. Checking for update.
        Nothing to do

    How can I fix this problem?
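    My working theory is that the installed php-common is an older (or differently sourced) build than the 5.1.6-44.el5_10 release php-imap wants, so the versions have to be brought into line first. Does this look like the right diagnostic/fix (a sketch, assuming the stock CentOS repos)?

        # what exact php-common build is installed, and from which repo?
        rpm -q php-common
        yum list installed 'php*'
        # bring the whole php stack to the same release, then add the module
        yum update 'php*'
        yum install php-imap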


  • Group Policy for DNS Server Addresses

    - by John
    This question deals strictly with the methodology of using Active Directory to publish DNS settings, and to control the DNS server address order, on domain-attached clients. This is not asking whether this is the right function, role, or best use of group policy, but rather how to make it work. I would like to be able to publish DNS addresses in group policy, and possibly control the order in which the client consumes the addresses. I have tried following the information from this site without success. I set the following setting within group policy, but the client never shows the settings within the TCP/IP properties:

        Computer Configuration/Administrative Templates/Network/DNS Client/DNS Servers

    I did list them as a single space-separated list, for example:

        192.168.0.1 192.168.10.3 192.168.34.2 192.168.2.67 192.168.56.99 192.168.99.23

    This would be for Windows 7, Windows Server 2003, and Windows Server 2008 clients. I am not sure what I am doing wrong or how to get this to work. Am I missing a setting? Do I need to set something differently?
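    In case it matters, the fallback I'm considering if the policy can't be made to work is a computer startup script deployed via GPO (a sketch; the interface name is a placeholder and would need to match the clients' NIC naming):

        rem set-dns.cmd - deployed as a GPO computer startup script
        netsh interface ip set dns name="Local Area Connection" static 192.168.0.1
        netsh interface ip add dns name="Local Area Connection" 192.168.10.3 index=2
        netsh interface ip add dns name="Local Area Connection" 192.168.34.2 index=3

    I'd still prefer the native policy if someone can explain what it actually requires; I've also seen reports that even when it applies, it doesn't show in the TCP/IP properties dialog, which makes it hard to verify.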


  • Struggling with proper way to setup Permissions on Linux/Apache Web Server

    - by Dr. DOT
    Your expert experience and assistance is greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file & directory permissions for FTP and WWW protocol activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box:
    - files are owned by the user account set up in WHM (e.g., "abc")
    - files have a group setting of "abc" as well
    - files are created with permissions 644
    - directories are owned by "abc"
    - directories have a group setting of "abc"
    - directories are created with permissions 0755
    Again, these are the default permission settings. Now everything is fine with FTP activity, but please advise me if any of these file/directory settings create issues, especially with security.

    Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. sub-directories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/web apps to work, I have had to run:

        chown nobody *
        chgrp nobody *
        chmod 0777 *

    on everything in these certain & selected sub-directories. I know this is probably a huge security hole (so don't ask me for any links :) but how should I set all the permissions to allow my FTP user to do his thing while allowing the PHP apps to do their thing, while also "minimizing" any security risks and exposures? I know that big CMS systems like Drupal, Joomla, WordPress and so on handle this. Thanks ahead of time for reading through this and offering your expert advice!
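    One arrangement I'm considering (a sketch; "uploads" stands in for whatever directories the apps must write to) is to keep "abc" as owner, give the web server's group write access only there, and use the setgid bit so new files inherit the group:

        # web server (nobody) gets group access only in the writable area
        chown -R abc:nobody /home/abc/public_html/uploads
        find /home/abc/public_html/uploads -type d -exec chmod 2775 {} \;   # setgid dirs
        find /home/abc/public_html/uploads -type f -exec chmod 664 {} \;
        # everything else stays abc:abc with 755/644

    Would that be the right direction, or is running PHP as the account user (suPHP/FastCGI), which removes the need for group-writable directories entirely, the better answer?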


  • Hadoop init script asks for a password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to execute an init script to make Hadoop run on startup, but it asks for a password every time I execute it.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          hadoop services
        # Required-Start:    $network
        # Required-Stop:     $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=
        test -x $HADOOP_BIN || exit 0
        RETVAL=0
        set -e
        cd /

        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c "$HADOOP_BIN/start-all.sh" > /var/log/hadoop/startup_log
            case "$?" in
                0)
                    echo SUCCESS
                    RETVAL=0
                    ;;
                1)
                    echo TIMEOUT - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
                *)
                    echo FAILED - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
            esac
            set -e
        }

        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c "$HADOOP_BIN/stop-all.sh" > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }

        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }

        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME."
                ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME."
                ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME."
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1
                ;;
        esac
        exit $RETVAL

    Please tell me how to run Hadoop without entering a password.
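    One thing I suspect: su only prompts when the script is run by a non-root user, and at boot time init runs it as root, so no password should be asked there. For manual runs, would something like this be the right fix (a sketch; it assumes the script is installed as /etc/init.d/hadoop)?

        # run it through sudo instead of invoking it directly
        sudo service hadoop start
        # optionally, allow that without a sudo password - add via 'sudo visudo':
        #   naveen ALL=(root) NOPASSWD: /usr/sbin/service hadoop *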


  • How to set umask globally?

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too -- but only for console logins. If I log in to my desktop and open a terminal there, I still get to see the default umask 022. Same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component is responsible) sources some different setting than a console login does, and damned if I could tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How / where do I set umask globally (and for all users), so that the X11 desktop environment gets the memo as well? (The system is Linux Mint / Ubuntu, in case that changes anything...)
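    One candidate I've come across is PAM, which should cover both console and display-manager logins on Debian-family systems (a sketch, assuming pam_umask is present; on Ubuntu it ships with libpam-modules):

        # /etc/pam.d/common-session  (used by login, sshd, the display manager, ...)
        session optional pam_umask.so umask=0002

        # then log out and back in, and verify from a desktop terminal:
        umask    # expect 0002

    Can anyone confirm this is the right hook, or whether the display manager bypasses common-session?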


  • How to host multiple FLEX applications in IIS7

    - by Devtron
    Hello, I manually deploy a FLEX application to my web server (IIS 7). There are two virtual directories: 1.) Default, 2.) myFlexApp1. myFlexApp1 is where my working FLEX application resides. I now need to deploy a different FLEX application (let's call it myFlexApp2) to the same web server. I set up a virtual directory for [myFlexApp2] and it complains about the "bindings" using port 80, which is already used by [myFlexApp1]. I have tried to give them separate host names in their bindings properties, for example myFlexApp1.mydomain.com and myFlexApp2.mydomain.com, but I can never get [myFlexApp2] to show from an external browser. I was able to get only one or the other to display, but never could run both. Here is what I need:
    - myFlexApp1.mydomain.com -- myFlexApp1
    - calendar.mydomain.com -- myFlexApp2
    - test.mydomain.com -- myFlexApp1
    where test.mydomain.com is the default URL. Is this possible? What am I doing wrong? I even tried to edit the hosts file in [C:\Windows\System32\drivers\etc] but that didn't work either. How can I serve up two FLEX applications on IIS 7? It shouldn't be this hard!
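    My current understanding is that host-header bindings belong to sites, not virtual directories, so this should be two IIS sites sharing port 80 - and that a leftover site bound to a bare *:80 with no host name would swallow traffic for every name, which might explain the one-or-the-other behaviour. A sketch of what I think the appcmd version looks like (site names and paths are placeholders; DNS for all three names must of course point at the server):

        %windir%\system32\inetsrv\appcmd add site /name:"myFlexApp1" ^
            /physicalPath:"C:\inetpub\myFlexApp1" /bindings:http/*:80:myflexapp1.mydomain.com
        %windir%\system32\inetsrv\appcmd set site "myFlexApp1" ^
            /+bindings.[protocol='http',bindingInformation='*:80:test.mydomain.com']
        %windir%\system32\inetsrv\appcmd add site /name:"myFlexApp2" ^
            /physicalPath:"C:\inetpub\myFlexApp2" /bindings:http/*:80:calendar.mydomain.com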


  • How to Set up MySQL Server to utilize more memory

    - by Cyril Gupta
    Hi there, I have MySQL set up on Windows along with Plesk. The version is 5.0.45 Community. The databases I have on the server are MyISAM as well as InnoDB, but predominantly InnoDB. I have 8G of memory on my server, but MySQL isn't going up more than 1.3G, and tweaking the settings isn't helping. I tried to increase the memory allocation for innodb_buffer_pool_size; it works if I set it up to 1G, but if I set 2G or above, the server doesn't come back online! I want MySQL to use at least 5-6 gigs of the memory I have for performance, but I can't get this to work. Can anyone please help? My MySQL config file is below (there are 2 mysqld sections... when I used MySQL Workbench it created another one!)

        [MySQLD]
        port=3306
        basedir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL
        datadir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL\\Data
        default-character-set=latin1
        default-storage-engine=INNODB
        query_cache_size=128M
        table_cache=1024
        tmp_table_size=32M
        thread_cache=32
        myisam_max_sort_file_size=100G
        myisam_max_extra_sort_file_size=100G
        myisam_sort_buffer_size=2M
        key_buffer_size=32M
        read_buffer_size=16M
        read_rnd_buffer_size=2M
        sort_buffer_size=8M
        innodb_additional_mem_pool_size=24M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=10M
        innodb_buffer_pool_size=1G
        innodb_log_file_size=10M
        innodb_thread_concurrency=8
        max_connections=700
        key_buffer=48M
        max_allowed_packet=5M
        sort_buffer=2M
        net_buffer_length=4K
        old_passwords=1
        wait_timeout=20
        connect_timeout=60

        [client]
        port=3306

        [mysqld]
        query_cache_min_res_unit = 4096
        innodb_additional_mem_pool_size = 1048576
        innodb_buffer_pool_size = 1G
        query_cache_limit = 1048576
        key_buffer_size = 8388608
        sort_buffer_size = 2097144
        query_cache_type = 1
        query_cache_size = 312M
        log-slow-queries
        connect_timeout = 5
        wait_timeout = 20
        thread_cache_size = 15
        read_buffer_size = 131072
        table_cache = 64


  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, there is a need to download one or two files with remote_file that are outside our firewall and which must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources. Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb?

    In my sample code, below, I'm downloading a redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.

        ruby_block "setenv-http_proxy" do
          block do
            Chef::Config.http_proxy = node['redmine']['http_proxy']
            ENV['http_proxy'] = node['redmine']['http_proxy']
            ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
          notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
        end

        remote_file "redmine-bundle.zip" do
          path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
          source attrs['download_url']
          mode "0644"
          action :create_if_missing
          notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
          notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
        end

        ruby_block "unsetenv-http_proxy" do
          block do
            Chef::Config.http_proxy = nil
            ENV['http_proxy'] = nil
            ENV['HTTP_PROXY'] = nil
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
        end


  • Windows Server 2008 ignores any change made to firewall

    - by Maurice Courtois
    I have been trying for the last 2 hours to make my Windows Server 2008 answer ping. I have tried almost every single solution I have found on the web; so far nothing works. My current setup: 2 NICs (1x Internet connection, 1x local network). The server acts as a VPN server, so I set the corresponding NICs as either Public or Private. I also enabled the rule for "File and Printer Sharing (Echo Request...)" for all NICs and from any IPs. I have always been able to ping from the local network, or the local IP while connected to the VPN. I also tried to create a specific rule for ICMP ping, and disabling the firewall for all but the public NIC. Regardless of all this, I still can't ping that server from the Internet. Any idea or suggestion what could cause this? I have the impression that when you set the server up as a VPN server (I switched the box on when setting it up to block everything other than VPN connections), changing anything in the firewall settings through mmc is pointless!?
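    For reference, the command-line version of the ICMP rule I created looks like this (a sketch of what I ran, plus a profile check):

        netsh advfirewall firewall add rule name="Allow ICMPv4 echo request" ^
            protocol=icmpv4:8,any dir=in action=allow
        rem confirm which profile the public NIC actually landed in:
        netsh advfirewall show currentprofile

    Could the "block everything but VPN" option have installed RRAS static packet filters on the public interface? My understanding is those are evaluated independently of Windows Firewall, which would explain why firewall edits appear to do nothing.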


  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode. I have a login/user that has a known username. Using that user/login, I can properly connect when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''", with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that user/login DOES exist, as I can use it locally. There are no further details about the error.

    Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch -- same result: locally fine, remotely an error.

    EDIT: Partially resolved. I'm now past the base issue -- the clients were trying to connect via the default instance. I don't know why. So, once proper ports were opened in the firewall, and a static port assigned to the named instance, I can now connect -- BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
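    My understanding is that SQL Browser answers instance-name lookups on UDP 1434, so "Server\Instance" only works remotely if that port is open in addition to the instance's TCP port. This is what I've tried on the firewall side (a sketch; 49172 stands in for the static port I assigned):

        netsh advfirewall firewall add rule name="SQL named instance (TCP)" ^
            dir=in action=allow protocol=TCP localport=49172
        netsh advfirewall firewall add rule name="SQL Browser (UDP)" ^
            dir=in action=allow protocol=UDP localport=1434
        rem after which "Server\Instance" should resolve without the explicit port

    Is there anything else SQL Browser needs in order to answer remote clients?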


  • Accessing a shared folder in Windows Server 2008 R2.

    - by Triztian
    Hello all. It seems my involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share. The folder is in the public user folder, and its permissions should be (and I believe they are) read/write. The problem is that I can't connect from a remote machine; I mean, I don't know the way it should be accessed. The server has a public IP, and we also use it as the host for our website; I don't know if that affects it, though. The folder will be used as the "keeper" for the QuickBooks company files and has the Database Server Manager installed. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if the share could be accessed that way. Also, the share has a location displayed when I right-click and choose Properties.

    Here's what I've tried:
    - Setting up a VPN connection (Windows Vista and 7): got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "connection failed, error 800". I suppose this is because in the domain field I entered the server's workgroup.
    - Right-click, Add a network location (Windows 7): went through the wizard until I reached the point of entering the location; tried many things - the name in the share's properties (\\SOMETHING\Share), http://www.example.com, the IP address.

    I'm quite unfamiliar with this, so I have my guesses:
    - Since the group and user are local, they do not have access to the folder.
    - The firewall on the server is blocking my connection.

    Anyways, any help and guidance is truly appreciated.
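    In case it helps clarify what I'm aiming for: once some kind of tunnel (the VPN) is working - I gather exposing file sharing directly over the public internet is best avoided - I believe mapping the share from a client is a single command (a sketch; SERVER, Share, and shareuser are the placeholders from my description above):

        net use Z: \\SERVER\Share /user:SERVER\shareuser *
        rem "*" prompts for the password; SERVER\shareuser means
        rem "the local account shareuser on machine SERVER", not a domain account

    Is that the right mental model for local (non-domain) accounts?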


  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site which I would like to configure with its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:

    1) Configured a new website with its own AppPool.
    2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
    3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32/inetsrv/config:

        <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
                     arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com"
                     maxInstances="4" idleTimeout="300" activityTimeout="30"
                     requestTimeout="90" instanceMaxRequests="200"
                     protocol="NamedPipe" queueLength="1000" flushNamedPipe="false"
                     rapidFailsPerMinute="10">
          <environmentVariables>
            <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
          </environmentVariables>
        </application>

    4) I then created a php.ini file located in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website):

        register_globals = on

    5) I then ran test.php, which simply outputs everything the call to phpinfo() returns. At this point, I observe that the global setting is register_globals = off (as it should be), but the local setting is also register_globals = off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:

        Configuration File (php.ini) Path           C:\Windows
        Loaded Configuration File                   C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files     (none)
        Additional .ini files parsed                (none)

    What am I messing up on, or is there a different way to go about this?
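    Since the "Loaded Configuration File" line shows my PHPRC override never took effect, would a per-directory .user.ini be an acceptable alternative? My understanding (a sketch, untested) is that PHP 5.3 under CGI/FastCGI honors these without touching applicationHost.config, for directives changeable at PHP_INI_PERDIR or below:

        ; C:\inetpub\wwwroot\kickasswebsite.com\.user.ini
        register_globals = On

        ; the main php.ini controls the lookup behavior, e.g.:
        ;   user_ini.filename = ".user.ini"
        ;   user_ini.cache_ttl = 300     ; re-read interval in seconds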


  • Logitech Performance MX Mouse Jumps on OS X Lion (10.7.4)

    - by Adam Thompson
    I have a Logitech MX Revolution wireless mouse that I am trying to use with OS X Lion. Everything is working except for one problem... there is a small, but quite noticeable, jump when the mouse cursor is moved. The problem is mostly prevalent when dragging and dropping files or trying to highlight items. It makes performing any task with the mouse accurately next to impossible. I did quite a bit of looking and found that all kinds of people have had mouse issues with OS X. I've tried all of the following with absolutely no success:
    - Using the official drivers from Logitech (these performed worse than the default mouse drivers in OS X).
    - Using SteerMouse as a third-party mouse driver. This worked ever so slightly better than the default driver, but still suffered quite frequently from the skipping problem.
    - Cleaning the sensor on the mouse and ensuring it's not the result of the surface that it's being used on.
    - Testing the mouse on a Windows machine. The mouse worked absolutely flawlessly on the other machine.
    - Changing the channel that my wireless router operates on, on the off chance my problems were the result of interference. This also had no effect.
    I can't think of anything else that could possibly interfere with the mouse. I am out of ideas on what to try, so I would really appreciate it if anyone has any suggestions. I should also mention that an old wired mouse I had laying around worked just fine when I plugged it in. This really isn't the best solution, however, as I really prefer the MX Revolution.


  • How to play 24 fps video smoothly on a 60Hz display? (or which player supports frame interpolation?)

    - by netvope
    I use mpc-hc to play videos on Win7 x64. With the default settings (#1), video playback is great most of the time. But for panning shots, playback is not smooth. I stepped through the video frame by frame and found that the panning movement is smooth (e.g. each frame shifts horizontally by 10 pixels), so the problem is how the 23.976 fps video is interpolated to 60Hz. The judder looks like what would be caused by a "2:3 pulldown", where the frames are played unevenly like: frame 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, etc. (#2)

    Using "optimal renderer settings" (#3) instead of the default disables the Aero theme and causes tearing. Setting my LCD display to 50Hz may have improved the judder slightly (but I can't really tell). My display does not support 24Hz or 48Hz, and forcing them in the Nvidia control panel gives a blurry screen. I've tried other video players (VLC and KMPlayer), the ReClock DirectShow filter, video files from different sources (#4), turning DXVA on/off, and a computer with a different GPU, but the judder in the playback is similar. None of them solved the problem.

    So, how can I play 23.976 or 24 fps video smoothly on a 60Hz display? I think a video player could make the video smoother by doing linear interpolation, such as:
    1. 100% frame 1
    2. 60% frame 1 + 40% frame 2
    3. 20% frame 1 + 80% frame 2
    4. 80% frame 2 + 20% frame 3
    5. 40% frame 2 + 60% frame 3
    6. 100% frame 3
    7. 60% frame 3 + 40% frame 4
    ... etc.
    (The weights follow from each output frame's source position: output frame n sits at 0.4 x n source frames, and the fractional part gives the blend.) Can any existing video player do this?

    Footnotes:
    (#1) Video renderer: EVR Custom Pres.
    (#2) This example converts a 24 fps video into 30 fps.
    (#3) View > Renderer settings > Reset > Reset to optimal renderer settings
    (#4) The files I have are all H.264 mkv files, but I don't think the file format/encoding matters.


  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems - systems that my own server will inevitably interact with at some point. Hence, my questions:
    - What things should be kept in mind and double-checked when setting up an email server?
    - What resources are available for checking if my email server is set up correctly?
    I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these."

    Some things I've discovered myself:
    - Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup for the peer IP address when receiving. Matching a reverse lookup with a follow-up forward lookup is probably employed to weed out open relays run through malware on home networks.
    - Make sure the user in the From-address exists. The From-address is easily spoofed. A receiving mail server may try to contact the mail server in the From-domain, and see if the From-user actually exists.
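    For the DNS item, this is how I've been verifying it so far (a sketch; 203.0.113.5 and mail.example.com are placeholders for my real IP and HELO name):

        # forward: the HELO/EHLO name should resolve to the server's IP
        dig +short mail.example.com        # expect 203.0.113.5
        # reverse: the IP should map back to the same name
        dig +short -x 203.0.113.5          # expect mail.example.com.
        # MX for the domains we receive for
        dig +short MX example.com
        # also worth checking: an SPF TXT record, and that outbound port 25
        # isn't filtered by the provider
        dig +short TXT example.com

    Is there more in this vein that receiving servers commonly test?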


  • Can a Windows Domain play along with a Hosted Exchange service?

    - by benzado
    I'm setting up a computer network for a small (10-20 person) company. They are currently using a Hosted Exchange service they are totally happy with. Other than that, they are starting from scratch (the office doesn't even have furniture yet). They will need some kind of file sharing server set up in their office. If I set up a machine as a file server and nothing more, users will have three passwords to deal with: local machine, file server, and email. If I set up a Domain Controller, identities for the local machine and file server will be the same. But what about the Hosted Exchange server? Must the users have a separate email password, or is it possible to combine the two? (I realize it might depend on the specific hosting provider, but is it possible?) If not, it seems like I have these options:
    - Deal with it: users have a separate email password.
    - Host Exchange on the local server: more than they want to manage in-house?
    - Purchase a hosted VPS, make it part of the domain, and host Exchange there. (Or can/should a VPS be a domain controller?)
    I realize I have a lot of questions in there. The main one: is there any reason not to use a Hosted Exchange service if I'm setting up other Windows services?

