Search Results

Search found 47324 results on 1893 pages for 'end users'.

  • MySQL remote access not working - Port Close?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to 3306 on the existing server, but when I try the same with the new server it hangs for a few minutes and then returns:

        # mysql -utest3 -h [server ip] -p
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)

    Here is some output from the server:

        # nmap -sT -O localhost -p 3306
        ...
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        ...
        # netstat -anp | grep mysql
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  2  [ ACC ]  STREAM  LISTENING  12286  6349/mysqld  /DATA/mysql/mysql.sock
        # netstat -anp | grep 3306
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  3  [ ]  STREAM  CONNECTED  3306  1411/audispd
        # lsof -i TCP:3306
        COMMAND  PID   USER   FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
        mysqld   6349  mysql  10u  IPv4  12285   0t0       TCP   [domain]:mysql (LISTEN)

    I am running CentOS release 5.8 (Final) and MySQL 5.5.28 (Remi). Note: internal connections to MySQL work fine. I have disabled iptables, the box has no other firewall, and it runs Apache on port 80 and SSH with no problem. I had followed this tutorial - http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html

    I have bound the IP address in my.cnf:

        user=mysql
        bind-address = [server ip]
        port=3306

    I even started over by deleting the mysql folder in my datastore and running:

        mysql_install_db --datadir=/DATA/mysql --force

    Then I recreated all the users as per the manual (http://dev.mysql.com/doc/refman/5.5/en/adding-users.html). I have created one test user:

        CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
        GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    So all I can see is that the port is not really open. Where else might I look? Thanks.
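    A small sketch of a follow-up check, assuming nmap is available on both machines: scan the address mysqld is actually bound to rather than localhost, and repeat the probe from the remote PC so a silently dropped SYN ("filtered") can be told apart from an active reject ("closed").

        # On the server: scan the bound address, not 127.0.0.1, since my.cnf
        # binds mysqld to [server ip] only
        nmap -sT -p 3306 [server ip]

        # From the remote PC: -Pn skips the ping so only the TCP probe matters;
        # "filtered" here would point at a firewall or NAT in the path rather than mysqld
        nmap -sT -Pn -p 3306 [server ip]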

  • Error with Apache, Nagios and Snorby integration

    - by user1428366
    I'm trying to use Apache to serve two different websites (Nagios and Snorby). The problem is that when I try to see the "/snorby" website, Apache sends me the "It works" page. If I try to access "/nagios" it works perfectly. Snorby is running under Ruby Passenger. These are the config files:

        <VirtualHost *:80>
            ScriptAlias /nagios/cgi-bin "/srv/nagios/sbin"
            <Directory "/srv/nagios/sbin">
                # SSLRequireSSL
                Options ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
                # Order deny,allow
                # Deny from all
                # Allow from 127.0.0.1
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /srv/nagios/etc/htpasswd.users
                Require valid-user
            </Directory>
            Alias /nagios "/srv/nagios/share"
            <Directory "/srv/nagios/share">
                # SSLRequireSSL
                Options None
                AllowOverride None
                Order allow,deny
                Allow from all
                # Order deny,allow
                # Deny from all
                # Allow from 127.0.0.1
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /srv/nagios/etc/htpasswd.users
                Require valid-user
            </Directory>
        </VirtualHost>

    And the other one is this:

        <VirtualHost *:80>
            #Alias /snorby "/var/www/snorby-2.6.0/public"
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /var/www/snorby-2.6.0/public
            <Directory /var/www/snorby-2.6.0/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>
        </VirtualHost>

    If I disable the Nagios web page, the Snorby web page works. I think the problem is Snorby, because when I try to access the IP address with the Nagios page disabled, the web application redirects me to http:// myserverip/dashboard. Can anyone help me please? Thank you so much! Regards
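    One possible direction, sketched only: with two <VirtualHost *:80> blocks and no distinguishing ServerName, requests typically fall through to whichever vhost Apache loads first, so mounting Snorby as a sub-URI of the single vhost avoids the ambiguity. The file name below is made up, the paths are the ones from the post, and the RackBaseURI directive is assumed to be available in this Passenger 3 setup (RailsBaseURI is the equivalent for classic Rails apps).

        # Sketch: mount Snorby under /snorby inside the existing vhost instead of
        # keeping a second catch-all <VirtualHost *:80>
        cat > /etc/apache2/conf.d/snorby-suburi.conf <<'EOF'
        Alias /snorby /var/www/snorby-2.6.0/public
        <Directory /var/www/snorby-2.6.0/public>
            AllowOverride all
            Options -MultiViews
        </Directory>
        RackBaseURI /snorby
        EOF
        apache2ctl configtest && apache2ctl graceful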

  • Best Practice: Apache File Upload

    - by matnagel
    I am looking for a solution for trusted users to upload PDF files via HTML forms (with maybe PHP involved). This is quite a standard Ubuntu Linux server with Apache 2.x and PHP 5. I am wondering what the benefits of the Apache file upload module are. There were no updates for some time; is it actively maintained? What are the advantages over a traditional PHP upload with Apache 2 without this module? http://commons.apache.org/fileupload I remember that traditional PHP file upload is difficult, with some pitfalls; will the Apache file upload module improve the situation? The solution I am looking for will be part of an existing website and be integrated into the admin web frontend. Things I am not considering are WebDAV, SSH, FTP, FTPS, FTP over SSH. It should work with a browser and without installing special client software, so I am asking about a browser-based upload without special client-side requirements. I can require a modern browser like Firefox >= 3.5 or a modern WebKit browser like Chrome or Safari from the users.
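    As a small aside on the PHP-side pitfalls mentioned above, a hedged sketch of the settings usually worth checking before building a form-based upload (the path assumes a stock Ubuntu/Debian PHP 5 install; adjust to wherever php.ini lives):

        # Upload limits and tmp dir that commonly trip up form-based PHP uploads
        grep -E '^(file_uploads|upload_max_filesize|post_max_size|upload_tmp_dir|max_execution_time)' \
            /etc/php5/apache2/php.ini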

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers. Most of the time, 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now we are using Puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (ex: dbserver.staging1.acme.com) or $hardwaremodel. It works fine but it's a nightmare to maintain so many files. I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
            "staging1.acme.com" {
                #add dev1,dev2,tester1,tester2 to sudoers file
            }
            "testing2.acme.com" {
                #add tester1, tester3, tester4 to sudoers file
            }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the Puppet server and send one customized file to each Puppet client. The purpose of this is that searching for a username in a single file and removing it is quicker than doing it across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a sudoers.d folder, only a single sudoers file.
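    Not Puppet itself, but a rough shell sketch of the same idea (the hostnames and user lists are only illustrative): derive the per-environment extras from the machine's domain, emit one generated sudoers file from a single source, and syntax-check it before it ever reaches a client.

        # Illustrative only: one source, per-domain extras chosen by a case statement
        DOMAIN="$(hostname -d)"
        case "$DOMAIN" in
            staging1.acme.com)  EXTRA="dev1, dev2, tester1, tester2" ;;
            testing2.acme.com)  EXTRA="tester1, tester3, tester4" ;;
            *)                  EXTRA="" ;;
        esac

        OUT=/tmp/sudoers.generated
        printf 'User_Alias ADMINS = abe, bob, carol, dave\n' >  "$OUT"
        printf 'ADMINS ALL=(ALL) ALL\n'                      >> "$OUT"
        if [ -n "$EXTRA" ]; then
            printf 'User_Alias EXTRAS = %s\n' "$EXTRA"       >> "$OUT"
            printf 'EXTRAS ALL=(ALL) ALL\n'                  >> "$OUT"
        fi

        visudo -c -f "$OUT"   # never ship a sudoers file that fails the syntax check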

  • Passenger error: No such file or directory - config/environment.rb

    - by JJD
    I installed Redmine on Mac OS X Server 10.6.8 according to this installation description. So far everything works fine: when I start WEBrick, the server serves the Redmine pages. The gems and Redmine are installed under the user "redmine". After that I aimed at configuring Apache 2 with Passenger as described here. As suggested by the description, I also installed the Passenger Pane, which stores its virtual host configuration files in /private/etc/apache2/passenger_pane_vhosts. This is what I came up with after a lot of manual trial and error. At least now I can reach a Passenger error page.

        # redmine.vhost.conf
        <VirtualHost *:80>
            ServerName host
            ServerAlias localhost
            DocumentRoot "/Users/redmine/Sites/redmine"
            # RackEnv production
            # RackBaseURI /
            RailsEnv production
            RailsBaseURI /
            # PassengerUser www-data
            # PassengerGroup www-data
            <Directory "/Users/redmine/Sites/redmine">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    However, the Passenger module still runs into the following error.

    Error message: No such file or directory - config/environment.rb

    The /var/log/apache2/error_log of the web server states the following:

        [warn] NameVirtualHost *:80 has no VirtualHosts
        [notice] Apache/2.2.21 (Unix) Phusion_Passenger/3.0.12 configured -- resuming normal operations
        [ pid=21824 thr=2151905620 file=utils.rb:176 time=2012-06-01 18:22:07.126 ]: *** Exception Errno::ENOENT in PhusionPassenger::ClassicRails::ApplicationSpawner (No such file or directory - config/environment.rb) (process 21824, thread #<Thread:0x0000010086f2a8>):

    I experimented with the user-switching functionality of Passenger as described in the documentation, as you can tell from my configuration file. Though, I was not successful.
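    A couple of quick checks one might run (a sketch; the paths are the ones from the vhost above): Passenger conventionally expects DocumentRoot to point at the application's public/ directory, with config/environment.rb one level above it, and the spawning user has to be able to traverse into that tree.

        # Does the directory Apache was pointed at actually contain the Rails app?
        ls -l /Users/redmine/Sites/redmine/config/environment.rb

        # Passenger usually wants DocumentRoot to be the app's public/ directory
        ls -ld /Users/redmine/Sites/redmine/public

        # Can the spawning user reach it? www-data is the name from the commented-out
        # PassengerUser line; on OS X the Apache user is typically _www - adjust as needed
        sudo -u _www ls /Users/redmine/Sites/redmine/config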

  • How to stop SophosAV from scanning directories under source control

    - by user26453
    From googling, it seems it's well known that Sophos AV, as well as other AV programs, has issues with how it interacts with source control utilities like TortoiseHG or TortoiseSVN, and can inhibit them. One solution is to exclude directories under source control from on-access scanning, as detailed on Sophos's support site. There is a corollary article that mentions some issues related to this, namely the need to place multiple entries for exclusions based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files").

    One other twist is that I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts, I have included the source control directory as it is nested in both locations. As a result, I have added two entries to the on-access scanning exclusions (and, to be on the safe side, to the on-demand exclusions as well, although those should only come into play when I select a parent directory of the exclusion to be scanned on-demand, but still). You'll notice I have no need to add extra exclusions for those locations based on short vs. long name distinctions. The two exclusions I have, for both on-access and on-demand scanning, are:

        C:\Users\Username\source-control-directory
        E:\source-control-directory

    However, this does not seem to work, as TortoiseHG still lags terribly in response to any request while the AV software scans the directory being accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problem: if I completely disable on-access scanning, TortoiseHG responds very fast to all operations. Obviously I cannot leave scanning disabled, but since the exclusions don't seem to be working, what next?

  • Outbound HTTP performance tuning recommendations

    - by Richard Gadsden
    I'll detail my exact setup below, but general recommendations for a better web-browsing experience will be useful. A nice checklist of things to try would be great!

    I have 600 users on a single site with an 8MB leased line. I get a lot of moans about the performance of "the internet" (i.e. web browsing). What recommendations does the community have for speeding things up without just throwing more bandwidth at it? I expect I will end up buying some more, but good management tips are always valuable. My setup is this:

    - Cisco PIX (515E) firewall on the edge of the network. It's just doing some basic NAT and opening up a handful of ports to various bastion hosts (aka DMZ servers). The DMZ is just a switch that the servers are plugged into.
    - ISA 2006 Enterprise array (two servers) connecting the DMZ to the internal LAN, with WebSense Web Security filtering HTTP traffic so users can't look at porn or waste bandwidth on YouTube during working hours.

    I've done a few things already. I've just switched my internal DNS over to use root hints, which halved DNS query latency from 500ms to 250ms - well worth doing. I'm trying to cache more aggressively, but so much more of the internet is AJAXy and doesn't cache very well compared to five years ago. Plus the 70GB of cache, which felt like a lot a few years ago, really isn't any more. I'm getting about 45% cache hits by number of requests, but only about 22% by size, i.e. larger objects are less likely to be cached.

    Latency seems to be part of the problem. Is that attributable to the bandwidth problem, or are there things I can look at to try to reduce latency even on heavily loaded bandwidth?

  • Synergy setup broke on upgrade

    - by CoatedMoose
    I had Synergy set up and working fine with version 1.3.7. However, I got a new computer and decided to set it up as well. Because the setup I was working with was Ubuntu (server, dual monitors) with a Mac client, and the new computer (replacing the Mac) was Windows, I ended up updating everything to 1.4.10.

         ______ ______ ______
        | mac  | ubu1 | ubu2 |
        |______|______|______|

    The problem is that dragging to the left of ubu1 causes the cursor on the Mac to flicker briefly, and then the cursor shows up at the bottom right corner of ubu2. Here is my .synergy.conf:

        section: screens
            Andrews-Mac-Mini:
                ctrl = ctrl
                alt = meta
                super = alt
            Andrew-Ubuntu:
        end

        section: links
            Andrew-Ubuntu:
                left = Andrews-Mac-Mini
            Andrews-Mac-Mini:
                right = Andrew-Ubuntu
        end

    And the output from synergys -f:

        NOTE: client "Andrews-Mac-Mini" has connected
        INFO: switch from "Andrew-Ubuntu" to "Andrews-Mac-Mini" at 1679,451
        INFO: leaving screen
        INFO: screen "Andrew-Ubuntu" updated clipboard 0
        INFO: screen "Andrew-Ubuntu" updated clipboard 1
        INFO: switch from "Andrews-Mac-Mini" to "Andrew-Ubuntu" at 2398,833
        INFO: entering screen

  • How does a vsftpd server work and how to configure it?

    - by ysap
    I was asked to configure an FTP server based on the vsftpd package. The server is running on a remote machine to which I have superuser-privileged access. Being unfamiliar with the mechanics of FTP servers, I tried to figure out how FTP user accounts are configured.

    The previous maintainer used a shell script, which works on a list that we maintain to track user accounts and passwords, to configure the FTP accounts. From reading the script, I see that he generates a list of usernames and passwords and actually creates a user account on the Linux machine. This means that for each user we configure in the list, a new user account is added with the adduser command:

        adduser --home /home/ftp --no-create-home $user

    (but without a private /home/username directory - the shared /home/ftp is used instead). Each of these users can log into his account using the ssh command. This fact seems a little strange to me, as I'd think that the FTP accounts should be decoupled from the Ubuntu user accounts. As another side effect, when a user connects using a web browser, he lands in the /home/ftp directory. However, he can then use the "Up to a higher level directory" link to go up and effectively have access to all of our system.

    So, the questions are: Is this really how the FTP server is supposed to work in terms of configuring FTP accounts? If not, how do I configure the vsftpd server in a way that I have only the superuser Ubuntu account on that machine and all FTP accounts are... just FTP user accounts? Additionally, these FTP accounts should be configurable in terms of how and what they are allowed to access.
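    A hedged sketch of the usual way those two symptoms are tightened up (directive and option names should be checked against the vsftpd and adduser man pages on the actual server): chroot the local FTP users into their home directory so the "up one level" escape stops working, and give the accounts a non-login shell so they are FTP-only rather than SSH-capable.

        # Keep local FTP users inside their home directory
        grep -q '^chroot_local_user' /etc/vsftpd.conf || \
            echo 'chroot_local_user=YES' >> /etc/vsftpd.conf
        service vsftpd restart

        # Create future accounts without a usable login shell
        # (note: the shell may need to be listed in /etc/shells for PAM to accept the FTP login)
        adduser --home /home/ftp --no-create-home --shell /usr/sbin/nologin "$user"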

  • Parsing the output of "uptime" with bash

    - by Keek
    I would like to save the output of the uptime command into a CSV file in a Bash script. Since the uptime command has different output formats based on the time since the last reboot, I came up with a pretty heavy solution based on case, but there is surely a more elegant way of doing this.

    uptime output:

        8:58AM up 15:12, 1 user, load averages: 0.01, 0.02, 0.00

    desired result:

        15:12,1 user,0.00 0.02 0.00,

    current code:

        case "`uptime | wc -w | awk '{print $1}'`" in  #Count the number of words in the uptime output
        10) #e.g.: 8:16PM up 2:30, 1 user, load averages: 0.09, 0.05, 0.02
            echo -n `uptime | awk '{ print $3 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $8,$9,$10 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        12) #e.g.: 1:41pm up 105 days, 21:46, 2 users, load average: 0.28, 0.28, 0.27
            echo -n `uptime | awk '{ print $3,$4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $6,$7 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $10,$11,$12 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        13) #e.g.: 12:55pm up 105 days, 21 hrs, 2 users, load average: 0.26, 0.26, 0.26
            echo -n `uptime | awk '{ print $3,$4,$5,$6 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $7,$8 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $11,$12,$13 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        esac
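    A possible lighter-weight alternative, sketched here and worth testing against every uptime format the target systems produce: instead of counting words, anchor on the literal "up", the "N user(s)" field and the "load average(s):" label, then strip the commas left inside the fields, which is what the gsub calls in the case version do as well.

        # Same three CSV fields, derived by splitting on fixed markers:
        #   1) drop everything through "up"
        #   2) turn ", N user(s), load average(s): " into the two field separators
        #   3) replace the remaining ", " (inside fields) with spaces
        #   4) append the trailing comma the original produces
        uptime | sed -E 's/^.*up +//; s/, +([0-9]+ users?), +load averages?: +/,\1,/; s/, / /g; s/$/,/'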

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run GlassFish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

    - whether this has any significant performance impact compared to binding directly to port 80
    - whether I could make a similar setup also work for HTTPS (or if that must run on 443)
    - whether there's a way to prevent other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently for other users somehow?

    ...or if I should use authbind/privbind instead?

    Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?). But maybe the solution with iptables is good enough - what do you think? Thanks, Chris
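    On the HTTPS question, a minimal sketch of the same REDIRECT approach, assuming GlassFish v3's default HTTPS listener on 8181 (adjust if the listener port was changed):

        # Redirect privileged 443 to the non-privileged HTTPS listener,
        # mirroring the port-80 rule above
        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181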

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings.

    Users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
            DAV svn
            SVNPath /home/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script using the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume that this is an issue with the Apache user (www-data) not having adequate permissions, specifically to access /dev/null. I also read that the reason post-commit fails silently is that it doesn't report via stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository, and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
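    A couple of hedged checks that follow directly from the "null stdout" message (if opening /dev/null fails, the hook never runs at all, which would also explain the silent post-commit): make sure /dev/null is a normal world-writable device node and that the Apache worker really can open it.

        # /dev/null should be a character device with mode crw-rw-rw-
        ls -l /dev/null

        # Can www-data actually open it for writing? (same user Apache runs the hooks as)
        sudo -u www-data sh -c ': > /dev/null' && echo "www-data can write to /dev/null"

        # If it has been clobbered into a plain root-owned file, this restores it (run as root)
        # rm /dev/null && mknod -m 666 /dev/null c 1 3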

  • Is there a way to tell SGE to run specific jobs as root on the execution node?

    - by Rick Reynolds
    The title kinda says it all... We're using SGE/OGE to submit jobs to a set of worker nodes that then do things with specific pieces of equipment. The programs and scripts that were created to manipulate this equipment rely on running as root. I'd like SGE to handle allocation of resources in a way that is mindful of users, groups, projects, etc., but I also need the actual jobs to run with root permissions. I've read up on "How can one run a prologue script as root in gridengine?" to see if anything there was pertinent, but it seems that SGE provides the "user@" kind of spec specifically for prolog and epilog actions. Is there any similar functionality for the job itself? I'm aware of su/sudo approaches, but that won't really work in this environment because the sudoers file isn't globally managed (i.e. I'd have to add a whole set of users to /etc/sudoers on lots of machines). I'm currently looking into a setuid kind of solution, but that would definitely be an unnecessary workaround if SGE provides a way to declare that a specific job (or jobs in a specific queue) always needs to run with a specific user's rights.

  • Help diagnosing Likewise Open Active Directory authentication problem

    - by purpletonic
    I have two servers which were, up until recently, authenticating against the company's Active Directory domain controller. I believe a recent change to the Active Directory administrator password caused the servers to stop authenticating against AD. I tried to add the servers back to the domain using the command:

        domainjoin-cli join example.com adusername

    This seemed to work without complaints, but when I try to log in via ssh with my domain account, I get an invalid password error. When I run the command:

        lw-enum-users

    it prints all of the domain users, and looking up my own account, I see that it is valid and my password hasn't expired. I also ran lw-get-status and received the following:

        LSA Server Status:
        Agent version: 5.0.0
        Uptime: 0 days 3 hours 35 minutes 46 seconds

        [Authentication provider: lsa-activedirectory-provider]
            Status: Online
            Mode: Un-provisioned
            Domain: example.com
            Forest: example.com
            Site: Default-First-Site-Name
            Online check interval: 300 seconds
            [Trusted Domains: 1]
            [Domain: EXAMPLE]
                DNS Domain: example.com
                Netbios name: EXAMPLE
                Forest name: example.com
                Trustee DNS name:
                Client site name: Default-First-Site-Name
                Domain SID: S-1-5-24-1081533780-4562211299-822531512
                Domain GUID: 057f0239-7715-4711-e64b-eb5eeed20e65
                Trust Flags: [0x001d]
                    [0x0001 - In forest]
                    [0x0004 - Tree root]
                    [0x0008 - Primary]
                    [0x0010 - Native]
                Trust type: Up Level
                Trust Attributes: [0x0000]
                Trust Direction: Primary Domain
                Trust Mode: In my forest Trust (MFT)
                Domain flags: [0x0001]
                    [0x0001 - Primary]
                [Domain Controller (DC) Information]
                    DC Name: dc1.example.com
                    DC Address: 10.11.0.103
                    DC Site: Default-First-Site-Name
                    DC Flags: [0x000003fd]
                    DC Is PDC: yes
                    DC is time server: yes
                    DC has writeable DS: yes
                    DC is Global Catalog: yes
                    DC is running KDC: yes

        [Authentication provider: lsa-local-provider]
            Status: Online
            Mode: Local system

    Anyone got any ideas what might be occurring? Thanks in advance!
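    One hedged thing to try, using only the tools already mentioned in the post: leave the domain and re-join it so the machine account credentials are re-established against the DC, then retry an interactive authentication for the affected account (myaccount below is just a placeholder).

        # Reset the machine's domain membership (run as root)
        domainjoin-cli leave
        domainjoin-cli join example.com adusername

        # Confirm the account is still visible, then retry a login with the domain user
        lw-enum-users | grep -i myaccount
        ssh 'EXAMPLE\myaccount'@localhost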

  • Windows 2008 RemoteAPP client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows:

    - TS idle timeout is disabled via GPO
    - TS terminates disconnected sessions after 1 hour (via GPO)

    My users can log on to the terminal server and get a full desktop, or use RDP files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they remain logged on indefinitely, and when they disconnect, the session is terminated after an hour. However, when a user connects using a remote application link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects.

    Event IDs on the TS server:

        4779: This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.
        4778: This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.

    Users are connecting directly to 3389, not using a TS gateway at the moment. This behavior is consistent across the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it. Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client? Regards, Jeroen

  • Linked tables between MS Access 2003 and MS Access 2007 - write permissions denied

    - by STEVE KING
    We are in the process of switching over to Access 2007. We have numerous data tables in Access 2003 files. In one case, the user has 2007 on his PC and opened the front end in 2007. No problems. When the user is done, he clicks a button that executes a macro full of update queries. The macro reaches the first query and halts. We get a message saying we do not have permissions to write to this linked table (2003 format). There were no security files involved. We re-linked from 2007; same problem. LAN permissions were OK. I wound up having to import the tables into the front end in order for the user to be able to do his job.

  • Windows XP SP3 client over NAT to a Windows 2008 R2 SP1 file server disconnection

    - by Patrick Pellegrino
    We just transferred a pilot group from our old(!!) Netware infrastructure to a Microsoft infrastructure. Since then, our users have had problems accessing their files: they all experience disconnections from the mapped drives. The file server is accessed via a WAN connection through a firewall (SonicWall) between the two networks, and we do NAT. All clients run Windows XP SP3 and the file server is Windows 2008 R2 SP1. On the file server I get many Event ID 2012 entries. Many posts on the Internet suggest a problem between the SMB protocol and NAT. We need a short-term fix so we can continue to transfer users from Netware to Microsoft, after which we will do the network work to remove the NATing. I found this MS KB, http://support.microsoft.com/kb/2444558, which suggests a kind of workaround for Windows 7 clients, but I can't find anything for Windows XP. Can anyone help me with this? We don't want to stop the project and do a network job before migrating. Regards.

    Update: Our few Windows 7 computers don't seem to have this issue.

  • some PDF's to iPhones via ActiveSync are corrupt

    - by longneck
    We have two server applications (one .NET/ASP web app, the other a native Windows app) that generate PDFs that are then emailed to our users on Exchange 2010. The apps deliver the emails to the Exchange server via SMTP, and our iPhone/iPad users receive their email via ActiveSync. Pretty much all of the PDFs generated by the web app, and many of the PDFs generated by the Windows app, fail to open on an iPhone or iPad. Tapping the attachment shows the screen that would display the PDF, with the name of the file at the top, but the bottom of the screen is completely grey. One thing I have figured out is that the attachment on the iPad is uuencoded; forwarding the attachment to another email address shows the uuencoded format. Here's a sample:

        begin 600 unknown
        M)5!$1BTQ+C0-)>+CS],-"C8@,"!O8FH\/"](6S8U-B`Q-#A=+TQI;F5A<FEZ
        M960@,2]%(#DQ-#8O3"`Q,S`Q.2].(#$O3R`Y+U0@,3(X-3,^/@UE;F1O8FH-
        ---snip---
        M,C8T,"`P,#`P,"!N#0IT<F%I;&5R#0H\/"]3:7IE(#8^/@T*<W1A<G1X<F5F
        .#0HQ,38-"B4E14]&#0H`
        `
        end

    whereas the normal version of the file looks like a normal PDF:

        %PDF-1.4
        %âãÏÓ
        6 0 obj<</H[656 148]/Linearized 1/E 9146/L 13019/N 1/O 9/T 12853>>
        ---snip---
        trailer
        <</Size 6>>
        startxref
        116
        %%EOF

    So I think the problem is that the attachment is being double uuencoded somewhere, or the iPhone is failing to recognize that the attachment is uuencoded and not decoding it. Any suggestions on where to begin troubleshooting this problem?
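    A small hedged check that separates "the PDF is corrupt" from "the PDF is fine but arrives uuencoded": save the forwarded attachment body to a file and decode it with uudecode (from the sharutils package on most Linux distributions). If the decoded file opens as a valid PDF, the payload survived and the problem lies in how the encoding is negotiated or flagged in transit; the file names below are placeholders.

        # attachment.uu is the saved uuencoded body; check.pdf is just an output name
        uudecode -o check.pdf attachment.uu
        file check.pdf        # should report "PDF document" if the payload is intact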

  • Windows Server 2008 Create Symbolic Link, updated Security Policy still gives privilege error

    - by Matt
    Windows Server 2008, RC2. I am trying to create a symbolic/soft link using the mklink command:

        mklink /D LinkName TargetDir

    e.g.

        c:\temp>mklink /D foo bar

    This works fine if I run the command line as Administrator. However, I need it to work for regular users as well, because ultimately I need another program (executing as a user) to be able to do this. So, I updated the Local Security Policy via secpol.msc. Under "Local Policies" > "User Rights Assignment" > "Create symbolic links", I added "Users" to the security setting. I rebooted the machine. It still didn't work. So I added "Everyone" to the policy. Rebooted. And STILL it didn't work. What on earth am I doing wrong here? I think my user is even an Administrator on this box, and running a plain command line even with this updated policy in place still gives me:

        You do not have sufficient privilege to perform this operation.

  • Can't get SSH public key authentication to work

    - by Trey Parkman
    My server is running CentOS 5.3. I'm on a Mac running Leopard. I don't know which is responsible for this: I can log on to my server just fine via password authentication. I've gone through all of the steps for setting up PKA (as described at http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-ssh-beyondshell.html), but when I use SSH, it refuses to even attempt publickey verification. Using the command ssh -vvv user@host (where -vvv cranks verbosity up to the maximum level), I get the following relevant output:

        debug2: key: /Users/me/.ssh/id_dsa (0x123456)
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug3: start over, passed a different list publickey,gssapi-with-mic,password
        debug3: preferred keyboard-interactive,password
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password

    followed by a prompt for my password. If I try to force the issue with ssh -vvv -o PreferredAuthentications=publickey user@host, I get:

        debug2: key: /Users/me/.ssh/id_dsa (0x123456)
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug3: start over, passed a different list publickey,gssapi-with-mic,password
        debug3: preferred publickey
        debug3: authmethod_lookup publickey
        debug3: No more authentication methods to try.

    So, even though the server says it accepts the publickey authentication method, and my SSH client insists on it, I'm rebutted. (Note the conspicuous absence of an "Offering public key:" line above.) Any suggestions?
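    A few hedged things worth checking on both ends (the paths are the CentOS defaults and the id_dsa key named in the debug output): the client only offers keys its configuration allows, and sshd silently ignores authorized_keys when the permissions on the home directory, ~/.ssh, or the file itself are too loose.

        # Client (Mac): is publickey enabled and the key actually configured?
        grep -iE 'PreferredAuthentications|PubkeyAuthentication|IdentityFile' \
            ~/.ssh/config /etc/ssh/ssh_config 2>/dev/null
        ssh -vvv -i ~/.ssh/id_dsa user@host     # name the key explicitly

        # Server (CentOS): permissions that make sshd skip authorized_keys
        ls -ld ~ ~/.ssh ~/.ssh/authorized_keys   # none should be group/world writable
        # ...and watch the server's side of the attempt while logging in
        sudo tail -f /var/log/secure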

  • TicTac Photo and Windows 7

    - by Ben
    Hello, my wife has been creating a TicTac Photo album. I had to upgrade to Windows 7 as I'd had enough of Vista, so I backed up the TicTac Photo file and the photos to an external hard disk and performed a fresh install of Win7. Now here is the problem: TicTac Photo says it can't find the photos in the album. The locations were as follows:

        Vista: C:\Users\Kelly\Pictures
        Win 7: C:\Users\Kelly\My Pictures

    When I try to create a Pictures folder under Kelly, it pops up a message about merging the two folders and simply moves the pictures to the My Pictures folder. Does anyone know a way to make a folder called Pictures so I can eliminate the file path problem and then try again with TicTac Photo support to get them to fix my file? My wife is going to kill me, as it's our wedding album and she has spent upwards of 30 hours designing it, and me upgrading to Win 7 means it's all my fault. She does not understand file paths etc. I'm going to try opening the album file in a text editor to see if I can spot anything, but thought I would ask here as well. Any help appreciated.

  • How to enable telnet with port 3306 during Master to master replication on MySQL Server

    - by Mainio
    I am trying to do master-to-master replication on Windows Server 2008. I am successfully able to replicate all the databases from Master 1 to Master 2, but I am unable to replicate the changes made on Master 2 to Master 1. Later on I found that I can telnet from Master 2 to Master 1 on port 3306, but I am not able to telnet from Master 1 to Master 2. When I check netstat on both masters, I see the following results. I can't publish the public IPs, so I have written "Master 1" and "Master 2" for the respective addresses.

    Master 1:

        C:\Users\XXXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address    State
        TCP    Master 1:3306      Master 2:61566     ESTABLISHED
        TCP    Master 1:3389      My remote:56053    ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60675     ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60712     ESTABLISHED
        TCP    127.0.0.1:60675    Master 1:3306      ESTABLISHED
        TCP    127.0.0.1:60712    Master 1:3306      ESTABLISHED

    Master 2:

        C:\Users\XXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address    State
        TCP    Master 2:3389      My remote:56124    ESTABLISHED
        TCP    Master 2:61566     Master 1:3306      ESTABLISHED
        TCP    Master 2:61574     bil-sc-cm02:http   ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61562     ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61563     ESTABLISHED
        TCP    127.0.0.1:61562    Master 2:3306      ESTABLISHED
        TCP    127.0.0.1:61563    Master 2:3306      ESTABLISHED
        TCP    127.0.0.1:61573    Master 2:3306      TIME_WAIT

    All of this shows that on Master 2, port 3306 is not reachable from outside. Now I need a solution here: how can I fix it? Even a small suggestion would mean a million to me. Thank you. Regards, Udhyan

  • Problems building nodejs on MacOS Snow Leopard

    - by mrwooster
    I am having trouble building Node.js on Mac OS Snow Leopard. I think it might have something to do with my PATH variable not being set correctly for the developer tools location. For some reason, the developer tools (gcc, g++, make etc.) are all stored in /Developer/usr/bin. I added it to my PATH variable as follows:

        $ export PATH=$PATH:/Developer/usr/bin
        $ echo $PATH
        /opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin:/Developer/usr/bin

    When I try to configure, it complains about not finding OpenSSL. OK, not a big problem. So I try with --without-ssl:

        $ ./configure --without-ssl
        Checking for program g++ or c++          : /Developer/usr/bin/g++
        Checking for program cpp                 : /Developer/usr/bin/cpp
        Checking for program ar                  : /usr/bin/ar
        Checking for program ranlib              : /Developer/usr/bin/ranlib
        Checking for g++                         : ok
        Checking for program gcc or cc           : /Developer/usr/bin/gcc
        Checking for gcc                         : ok
        Checking for library dl                  : yes
        Checking for library util                : yes
        Checking for library rt                  : not found
        --- libeio ---
        Checking for library pthread             : yes
        Checking for function pthread_create     : not found
        /Users/Guy/git_src/node/node/deps/libeio/wscript:13: error: the configuration failed (see '/Users/Guy/git_src/node/node/build/config.log')

    Does anyone know how I can get round this? I am suspicious that it might be something to do with the PATH or another environment variable, but I'm not sure. Thanks, G
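    A hedged way to narrow it down (the configure error itself points at build/config.log for the real compiler invocation and message): check whether the toolchain on the PATH can compile and link a trivial pthread program at all, since a "pthread_create not found" on a Mac usually means the probe's test compile failed for some unrelated reason.

        # Read the actual failing compiler command and its error first
        tail -n 40 /Users/Guy/git_src/node/node/build/config.log

        # Then confirm the toolchain can link pthreads at all
        cat > /tmp/pthread_check.c <<'EOF'
        #include <pthread.h>
        static void *noop(void *arg) { return arg; }
        int main(void) {
            pthread_t t;
            pthread_create(&t, 0, noop, 0);
            pthread_join(t, 0);
            return 0;
        }
        EOF
        gcc /tmp/pthread_check.c -o /tmp/pthread_check -lpthread && /tmp/pthread_check && echo "toolchain OK"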

  • Outside VPN traffic not able to ping site-to-site VPN remote site

    - by Siriss
    We have two ASA 5510s running 8.4 in a site-to-site VPN setup. All internal traffic is working smoothly.

        Site/Subnet A: 192.100.0.0  - local
        Site/Subnet B: 192.200.0.0  - remote
        VPN users:     192.100.40.0 - assigned by the ASA

    When you VPN into the network, all traffic hits Site A, and everything on subnet A is accessible. Site B, however, is completely inaccessible for VPN users: no machine on subnet B, nor the firewall itself, is reachable by ping or otherwise. I know I am missing a NAT rule; in 8.2 it was easy as pie to set up using ASDM, but now I can't get it for the life of me, as 8.4 apparently made a lot of changes to the NAT rules. I am not too comfortable on the ASA command line, but if there is a command I need to add, or if you could point me to where I can add this in the 8.4 ASDM, I would really appreciate it. I have tried NAT Exempt, Static NAT, Static NAT Policies, etc. I think I tried all the options. I also might have my interfaces confused with the new look and feel of ASDM. Thank you much in advance, and I hope I have been thorough enough.

  • long access times and errors in iis application

    - by user55862
    I am having an issue with an IIS application (details of the environment at the end of the message). The web site works great most of the time and I cannot reproduce any error on our test system. On the live system, however, with on average 5-15 requests per second, I have a problem where some requests (about 0.05%) take over 300 seconds to complete. The other requests complete within 5-10 seconds. It seems as if all the erroneous requests end up with a Timer_EntityBody error in the error log. I have never seen this as an end user, but I guess they receive some kind of error message.

    I am trying to find out what can be causing this erroneous behaviour. Any ideas are welcome. I have read something about there possibly being an MTU issue if ICMP and MTU protocols are blocked in the firewall. Does that sound reasonable? I have also read that updating to IIS 7 should do the trick. Does that sound reasonable? I think that the problem has another cause, but I have no idea what.

    I have tried running the performance monitor and monitoring for database locks and active transaction counts. I can see some of these in the perfmon log for the MSSQL server (another machine), for example:

    - Active transactions sometimes peaks, sometimes for long periods
    - Lock waits per second sometimes peaks
    - Transactions per second sometimes peaks
    - Page IO latch wait sometimes peaks
    - Lock wait time (ms) sometimes peaks

    But I cannot see that any of these correlate to the errors in the IIS error log. On the IIS server machine I can also see with perfmon that some values peak a few times during a day:

    - Request execution time
    - Avg. disk queue length

    I can see neither of these correlating to the errors in the IIS error log. In the entries below I have anonymized some parts by replacing them with HIDDEN.

    The following can be seen in the access log:

        2010-10-01 08:35:05 W3SVC1301873091 **HIDDEN** POST /**HIDDEN**/Modules/BalanceModule.aspx - 80 - **HIDDEN** Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) ASP.NET_SessionId=**HIDDEN** 400 0 64 0 2241 127799

    At the same time the following can be seen in the error log:

        2010-10-01 08:35:05 **HIDDEN** 1999 **HIDDEN** 80 HTTP/1.0 POST /**HIDDEN**/Modules/BalanceModule.aspx - 1301873091 Timer_EntityBody Test+Pool

    I can tell the following about the environment:

    - Server: Windows Server 2003 x64 SP2 running on VMware
    - HTTP server: IIS v6.0 with ASP.NET 2.0.50727
    - Antivirus: Trend Micro OfficeScan (is it a good idea to have this on a server?)
