Search Results

Search found 7073 results on 283 pages for 'shared printers'.

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post "The Asset Pipeline, from development to production" and tweaked them for my environment. The two important files are:

    /etc/apache/site-available/example.com

        <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot "/var/www/sites/example.com/current/public"
          ErrorLog "/var/log/apache2/example.com-error_log"
          CustomLog "/var/log/apache2/example.com-access_log" common
          <Directory "/var/www/sites/example.com/current/public">
            Options All
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
          <Directory "/var/www/sites/example.com/current/public/assets">
            AllowOverride All
          </Directory>
          <LocationMatch "^/assets/.*$">
            Header unset Last-Modified
            Header unset ETag
            FileETag none
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
          </LocationMatch>
          RewriteEngine On
          # Remove the www
          RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
          RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
        </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess

        RewriteEngine on
        RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
        RewriteCond %{REQUEST_FILENAME}.gz -s
        RewriteRule ^(.+) $1.gz [L]
        <FilesMatch \.css\.gz$>
          ForceType text/css
          Header set Content-Encoding gzip
        </FilesMatch>
        <FilesMatch \.js\.gz$>
          ForceType text/javascript
          Header set Content-Encoding gzip
        </FilesMatch>

    But Apache seems to send empty gzip files: the test site loses all styles and Firebug doesn't find any content for the CSS files, although if I call the asset path directly I get some gibberish that looks like binary data. If I move the .htaccess file out of the way everything is back to normal. How could I find out what went wrong, or do you have any suggestions as to what error I made?

    > apache2 -v
    Server version: Apache/2.2.14 (Ubuntu)
    Server built: Mar 5 2012 16:42:17
    > uname -a
    Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
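    A quick way to narrow this down is to compare what Apache sends with and without the Accept-Encoding header (a sketch; application.css is a placeholder for one of the compiled assets):

        # Headers only: Content-Encoding: gzip and Content-Type: text/css should both appear
        curl -sI -H "Accept-Encoding: gzip" http://example.com/assets/application.css

        # Fetch the body and decompress it by hand to confirm it is valid, non-empty gzip
        curl -s -H "Accept-Encoding: gzip" http://example.com/assets/application.css | gunzip | head

    If the second command prints readable CSS, the .gz files themselves are fine and the problem is most likely in the response headers (for example the Content-Encoding header being dropped, or mod_deflate compressing the already-gzipped file a second time).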

  • Handling the Outlook 2007 AutoArchive PST file

    - by Doug Luxem
    We encourage our users to enable AutoArchive in Outlook 2007 as a way to manage their mailbox sizes. However, we frequently end up running into problems with the archive.pst file that is generated. The two main problems we have are: (1) The archive.pst file is located in the user's local profile directory and is never backed up. A dead hard drive or stolen laptop could result in months or years of missing email. All other personal data is stored on network shares, but we can't do that for Outlook PST files. (2) Without some sort of manual intervention, the archive will grow to an enormous size. Although Outlook 2007 SP2 handles large files better than before, it still results in slow response times from Outlook and an increased likelihood of a corrupt PST file. To mitigate these problems personally, I move the archives to a c:\Outlook folder and manually back that up to a shared drive every month or so. Additionally, I rotate archive files every year so that I have one file for each year (archive2008.pst, etc). Obviously, asking our users to do the same wouldn't help much. We need some sort of automated solution to take care of points 1 and 2. I have to imagine this is a common problem for Exchange organizations, so what is the best method to handle this?

  • will heavy network traffic affect other connections on HP ProCurve V1810-48G?

    - by nn4l
    I have an HP ProCurve V1810-48G switch with a few servers connected to it (everything in one rack). The switch is practically in its default configuration. While copying a few hundred gigabytes of data from server_a to server_b (using tar cf - data | ssh server_b 'cd myhome; tar xf -'), essentially saturating the link between those two servers, I noticed network-related error messages on the console of server_c - as if server_c were no longer able to send/receive traffic to server_d. After canceling the copy command everything was normal again. I would understand this if the connection used a shared resource, for example if server_a and server_c were in one datacenter, server_b and server_d were in another datacenter, and both datacenters were connected with a 100 MBit line. But all of the mentioned servers are connected to the same switch and are located in the same IP network. I always thought that a connection between two servers on one switch would not affect any other server connected to the switch. It is also possible that the network-related error messages are caused by something else - but I can't risk a network problem for any other system on this switch. Please advise.
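    Independent of the switch question, one way to keep a bulk copy from hogging the wire is to throttle the transfer itself; a sketch, assuming rsync and pv are available on the hosts and using an arbitrary 30 MB/s cap:

        # rsync with a bandwidth cap (value is in KB/s)
        rsync -a --bwlimit=30000 data/ server_b:myhome/data/

        # or keep the tar-over-ssh pipeline and rate-limit it with pv
        tar cf - data | pv -L 30m | ssh server_b 'cd myhome; tar xf -'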

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with sensible classification (by similarity, need, read/write access, etc), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", etc. At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc.. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
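    As a rough illustration of the proposal (not an existing standard), the behaviour can be approximated today with a shell hook that prints the .filing notes whenever you enter a directory that has them; a minimal sketch for bash:

        # Override cd: after changing directory, show .filing if present
        cd() {
            builtin cd "$@" || return
            if [ -f .filing ]; then
                echo "--- filing notes for $PWD ---"
                cat .filing
            fi
        }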

  • How to create domain or router-level workgroup (dd-wrt micro)

    - by Anthony
    In Windows, is Active Directory required for using "Domain" instead of "Workgroup"? Do I need to register a domain with a DNS provider like GoDaddy? What I really want to do is set up my home LAN so that everyone connecting to the main router (which is everyone, about 30 people) can see each other. I've tried having everyone use the same workgroup name, but it's still hit or miss. I tried setting the domain name and host name on the router itself, still nothing. I've tried joining the domain name I set instead of the workgroup, and I get an AD error. But ideally, everyone who is connected to the main router should simply see each other and any shared folders. I've had this problem when I was not the network admin on other large LANs, and I've never been able to figure out why sometimes people disappear or never see each other. I'd really prefer using the native sharing functionality in the OS to setting up an internal FTP or Samba server, etc. Any sure-fire ways to fix this? (Maybe an open source clone of AD?) Thanks!

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of the site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit: I'm not as familiar with Fedora as I'd like, I run Ubuntu on my home box and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory, and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache saw them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I'm liking the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user as the FTP account is saving as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP access default to giving this group read/write access to everything like Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
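    For the "bonus points" part, one common arrangement is a shared group plus a setgid webroot, with the SELinux context set so Apache may read and write there; a sketch with placeholder names (webdev, someuser, /var/www/sites/site1), assuming semanage (from the policycoreutils tools) is installed:

        # shared group for Apache and the FTP users
        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev someuser

        # group-writable, setgid webroot so new files inherit the group
        mkdir -p /var/www/sites/site1
        chown someuser:webdev /var/www/sites/site1
        chmod 2775 /var/www/sites/site1

        # label the tree as web content SELinux will let httpd read and write
        semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/sites/site1(/.*)?"
        restorecon -Rv /var/www/sites/site1

        # vsftpd: make uploads group-readable/writable by default
        echo "local_umask=002" >> /etc/vsftpd/vsftpd.conf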

  • 2 servers on 2 networks in same office

    - by irot
    Hello Gents, my office doesn't have a "server guy" in its employ, so I'm stuck with having to fix server issues for now. There are 2 servers in our office; both are file/web servers only accessible via LAN. They are currently on the same network, so no issue there. The problem is, we recently got a static IP to use, but it's with a different ISP, so now we have 2 routers in our office. I would like to open one of the servers to the public as a web/FTP server. But if I hook a server up to the new router, users will no longer be able to access the files shared on that server (because they're on different networks). How can I go about making one server accessible to the public using the static IP line, but still able to share the files on it with the users connected to the other network? The server I want to make public is running Windows Server 2008, the other server Windows Server 2003. And as far as I know, IP addresses are assigned by the router. I'm just a developer, I don't know much about networking. Thank you in advance.

  • Including email, IMs, configs, etc. in documentation or notes

    - by Jason Antman
    The shop I work in is pretty laid-back. We're on a documentation kick, only because historically we've been very bad with it. We do a lot of our brainstorming in face-to-face meetings, and also do a lot of communication via IM in addition to email. While I'm usually pretty good about documentation and keeping copious lab notes, I just finished a build of a host and spent hours searching through IMs, emails, files on my workstation, etc. to pull out anything I missed in my lab notes, which formed a large amount of the basis for the internal documentation. Aside from manually saving things to a project directory, does anyone have any thoughts on managing various data sources (especially email and IM) and tracking them on a per-project basis? Ideally, I'd like an easy way to put copies of emails, IM logs, etc. into a project-specific directory on my workstation and then just have a cron job that syncs that up with a shared folder. This isn't really a candidate for anything more advanced, as the bulk of the data will be copies of configs, code, etc. Here are the big restrictions: Email is via a centralized Zimbra install, so nothing can happen server-side. My workstation is Linux. Aside from writing Pidgin and Thunderbird plugins that let me tag chats and emails as belonging to a project, and then copy them to the appropriate place... any thoughts? Suggestions? Thanks, Jason
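    For the "cron job that syncs that up with a shared folder" part, the crontab entry itself can stay very small; a sketch with made-up paths, assuming the share is already mounted at /mnt/shared:

        # push the local project tree to the shared folder every hour, on the hour
        0 * * * * rsync -a --delete "$HOME/projects/" /mnt/shared/projects/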

  • Need Help getting perl module DBD::mysql installed for bugzilla on RedHat.

    - by Alos Diallo
    Hi everyone, I am having some issues getting Bugzilla set up. I have the software on the server and am trying to get the prerequisites set up. I am using Red Hat 4.1.2-42. I have all of the required Perl modules save one: DBD::mysql. When I try: sudo perl install-module.pl DBD::mysql I get the following response (this is only an excerpt):

        rm -f blib/arch/auto/DBD/mysql/mysql.so
        LD_RUN_PATH="/usr/lib64/mysql:/usr/lib64:/lib64" /usr/bin/perl myld gcc -shared -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic dbdimp.o mysql.o -o blib/arch/auto/DBD/mysql/mysql.so \
          -L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto
        /usr/bin/ld: skipping incompatible /usr/lib/libssl.so when searching for -lssl
        /usr/bin/ld: skipping incompatible /usr/lib/libssl.a when searching for -lssl
        /usr/bin/ld: cannot find -lssl
        collect2: ld returned 1 exit status
        make: *** [blib/arch/auto/DBD/mysql/mysql.so] Error 1
        /usr/bin/make -- NOT OK
        Running make test
        Can't test without successful make
        Running make install
        make had returned bad status, install seems impossible

    I then tried the following: CFLAGS="-I/usr/lib64/mysql:/usr/lib64:/lib64" perl install-module.pl DBD::mysql and I get the same result. I have also tried to install it using CPAN but get the same result. Right now I have DBD-mysql v3.0007 but need v4.00. Also when I try to install OpenSSL it says I have the latest version. Does anyone know what I have to do to get this to work? Any help would be greatly appreciated. Thank you
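    The "cannot find -lssl" failure (together with ld skipping the 32-bit /usr/lib libraries) usually means the 64-bit OpenSSL development package is missing, not OpenSSL itself; a sketch of the usual fix, with package names as found on Red Hat/CentOS (an assumption here):

        # install the development headers/libraries the linker is looking for
        sudo yum install openssl-devel mysql-devel

        # then retry the Bugzilla helper
        sudo perl install-module.pl DBD::mysql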

  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has apache and wsgi working. I also have a python script that works just fine using the make_server command: if __name__ == '__main__': from wsgiref.simple_server import make_server srv = make_server('', 8080, display_status) srv.serve_forever() Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing. I found these lines in apache2.conf: WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007 WSGIProcessGroup moin And moin.wsgi is as listed: import sys, os sys.path.insert(0, '/usr/local/share/moin') from MoinMoin.web.serving import make_application application = make_application(shared=True) QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this: WSGIScriptAlias /status /mypath/status.wsgi WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007 WSGIProcessGroup status And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
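    In principle yes, one WSGIScriptAlias/WSGIDaemonProcess block per application works; the wrapper only has to expose a module-level callable named "application", so the make_server/serve_forever loop is not needed under mod_wsgi. A sketch of what /mypath/status.wsgi could contain, assuming the existing script lives at /mypath/status.py and defines display_status (both names are guesses):

        # create the wrapper (shown with a heredoc purely for illustration)
        cat > /mypath/status.wsgi <<'EOF'
        import sys
        sys.path.insert(0, '/mypath')
        from status import display_status   # the existing WSGI callable
        application = display_status        # mod_wsgi looks for "application"
        EOF

        # after adding the WSGIScriptAlias/WSGIDaemonProcess/WSGIProcessGroup
        # lines for /status to apache2.conf, reload Apache
        sudo service apache2 reload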

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping - and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons - e.g. using proprietary extensions to PHP, Apache and in the database layer as well) - but I want to control costs and don't want to go the fully private server route until we have determined the market size etc. So the only real alternatives AFAIK are a virtual server or the cloud. At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view) - I would like to seek some clarification on exactly what the difference between the two is. Back to my original question, my requirements are: IT infrastructure that can scale with growth; the ability to have control of the machine (e.g. to install our internally developed libraries etc); and backup software that is flexible and comprehensive enough (yet simple to use) to allow a (secured) backup strategy to be implemented. On this issue, I have always wondered where the actual backed-up data is stored (since the physical machines are remote, and one can't get physical access to whatever tapes or disks the data ends up on). I would also like some advice and recommendations in this area. Regarding data size, I am expecting the dataset to grow by a few megabytes every day (originally, say, 10MB a day; in about a year's time, possibly 50MB a day). As an aside, I have decided to deploy on a Debian server (most of my additional libraries etc were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reasons) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.
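    On the backup point, a common low-cost pattern on a small Debian VPS is a nightly dump plus an rsync push to storage you control, so you always know where the copies physically live; a minimal sketch with placeholder host and path names (and assuming MySQL with credentials in ~/.my.cnf):

        # nightly crontab entries: dump the database, then push everything offsite
        15 3 * * * mysqldump --all-databases | gzip > /var/backups/db-$(date +\%F).sql.gz
        45 3 * * * rsync -a --delete /var/www/ /var/backups/ backupuser@backuphost:/backups/myapp/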

  • Error compiling PHP 5.5.9 on CentOS 6.5 during make command

    - by Chris Mancini
    Here is the error message: cc: internal compiler error: Killed (program cc1) Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions. make: *** [ext/fileinfo/libmagic/apprentice.lo] Error 1 The very last thing make was processing is apprentice.lo which appears to be part of the image manipulation libraries (maybe?). I am using Ansible to provision my instance. It is a Digital Ocean single core 512MB VM. I have been using vagrant / ansible with the same config locally for dev and it has compiled fine, this is the first cloud VM I am attempting to provision. The only difference is the base image for my DO server is coming from DO and for my local dev, I built my own Vagrant box via VirtualBox from a stock CentOS basic server install. I pull it down from my DropBox. The problem has been experienced by others and reported as a php bug report My php ansible role up to the error: --- - name: Download php source get_url: url={{ php_source_url }} dest=/tmp register: get_url_result - name: untar the source package command: tar -xvf php-{{ php_version }}.tar.gz chdir=/tmp when: get_url_result.changed or php_reinstall - name: configure php 5.5 command: > ./configure --prefix={{ php_prefix }} --with-config-file-path={{ php_config_file_path }} --enable-fpm --enable-ftp --enable-mbstring --enable-pdo --enable-soap --enable-sockets=shared --enable-zip --with-curl --with-fpm-group={{ nginx_group }} --with-fpm-user={{ nginx_user }} --with-freetype-dir=/usr/lib64/ --with-gd --with-jpeg-dir=/usr/lib64/ --with-libdir=lib64 --with-mcrypt --with-openssl --with-pdo-mysql --with-pear --with-readline --with-tidy --with-xsl --with-zlib --without-pdo-sqlite --without-sqlite3 chdir=/tmp/php-{{ php_version }} when: get_url_result.changed or php_reinstall - name: make clean when reinstalling command: make clean chdir=/tmp/php-{{ php_version }} when: php_reinstall - name: make php command: make chdir=/tmp/php-{{ php_version }} when: get_url_result.changed or php_reinstall Thanks in advance for any help. :)
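    For what it's worth, "cc: internal compiler error: Killed (program cc1)" on a 512MB droplet is very often the kernel's OOM killer ending cc1 mid-build rather than a real GCC bug; the usual workaround is to add swap before compiling. A sketch (1 GB swap file; size and path are arbitrary):

        # create and enable a temporary swap file, then rebuild
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        make clean && make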

  • outlook security alert after adding a second wireless access point to the network

    - by Mark
    Just added a Netgear WG103 Wireless Access Point in our conference room to allow visitors to access the internet through our internal network. When it is switched on, visitors can connect to the internet and everything works fine. Except that when the Access Point is switched on, normal users of the network get a Security Alert when they try to start Outlook 2007. The Security Alert is the same as the one shown in question 148526 asked by desiny back in June 2010 (http://serverfault.com/questions/148526/outlook-security-alert-following-exchange-2007-upgrade-to-sp2), except that rather than "autodiscover.ad.unc.edu" my security alert references our "Remote.server.org.uk". If I view the certificate it relates to "Netgear HTTPS:....", but the only Netgear equipment we have is the new Access Point installed in the conference room. If the Access Point is not switched on we do not get the Security Alert. At first I thought it was because we had selected "WPA-PSK & WPA2-PSK" Network Authentication Type, but it continues to occur even if we opt for "Shared Key" WEP Data Encryption. I do not understand why adding a Netgear Wireless Access Point would cause Outlook to issue a Security Alert when users try to read their email. Does anyone know what I have to do to get rid of the Security Alert? Thanks in advance for reading this and helping me out.

  • Own server, multiple websites: most secure PHP setup

    - by plua
    Hi there, We have a company server with a variety of websites. They are maintained by different people from within our company. All websites are public. The server access is limited to our company only. This is NOT a shared hosting environment. We are looking into securing the server, currently analyzing the risk related to permissions of files. We feel the highest risk is when files are uploaded and then opened/executed by the public. This should not happen, but an error in a script might allow people to do so (there are image uploaders, file uploaders, etc). Uploader scripts use PHP. So the question is: what is the best way of setting / organizing permissions of files and processes? There seem to be several options to run PHP (and Apache), and setting the permissions. What should we take into consideration? Any tips? We are considering mod_php and FastCGI, but perhaps given our situation other solutions are preferred?
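    One arrangement worth considering here is PHP-FPM behind FastCGI, with one pool and one unix user per site, so an uploaded script can never run with another site's privileges; a sketch of a pool definition with placeholder names (site1 and its paths), written out via a heredoc:

        # one pool file per site, each running as its own user
        cat > /etc/php-fpm.d/site1.conf <<'EOF'
        [site1]
        user = site1
        group = site1
        listen = /var/run/php-fpm-site1.sock
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        ; keep PHP from reading or writing outside the site's own tree
        php_admin_value[open_basedir] = /var/www/site1:/tmp
        EOF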

  • Port 22 is not responding

    - by Emanuele Feliziani
    I'm trying to make the jump from shared hosting to a VPS for better performance and greater flexibility, but am stuck with the fact that I can't access the machine via ssh. First of all, the machine is a CentOS 6.3 cPanel x64 with WHM 11.38.0. sshd is running (it appears in the list of running processes). Running a port scan, I see that port 22 is not responding. Port 21 is, but I am not able to access the machine via ftp (I think it's a security measure, but I don't know where to disable/enable it). So, I'm stuck in WHM and have no way to access the configuration of the machine, neither via ssh nor with ftp/sftp. When trying to connect with ssh via Terminal I only get this: ssh: connect to host xx.xx.xxx.xxx port 22: Operation timed out I also tried to access with the hostname instead of the IP address and it's the same. There seems to be no firewall in WHM, and I have whitelisted my home IP address to access ssh, though there were no restrictions in the first place. I have been wandering through all the settings and options in WHM for several hours now, but can't seem to find anything. Does anybody have a clue as to where I should start investigating? Update: Thanks everyone. It was in fact a firewall issue. There was a firewall not controlled by the WHM software. I managed to crack into the console from the VPS control panel (a terrible, terrible Java app that barely took my keyboard input) and disabled the firewall altogether by running service iptables stop, so that I was able to access the machine via ssh from the terminal. Now I will have to set up the firewall again, because the command I ran seems to have completely wiped the iptables rules. Can you recommend any newbie-friendly resource where I can learn how to go about this and what I should block? Or should I just go with something like this: http://configserver.com/cp/csf.html ? Thanks again to everyone who helped me out.
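    If you end up rebuilding the rules by hand rather than using CSF, a minimal default-deny ruleset for CentOS 6 looks roughly like this (the port list is just an example; add whatever cPanel/WHM services you actually use):

        # allow loopback and established traffic, then only the services you need
        iptables -F
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # ssh
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # http
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # https
        iptables -P INPUT DROP
        service iptables save    # persist across reboots on CentOS 6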

  • Windows 7 ssh file server.

    - by Siriss
    Hello all. I have looked at the other posts but have not quite found an answer; I have a question about Windows file sharing over SSH. I have copssh installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router etc. I connect from a Mac or PuTTY with this command: ssh -l copsshusername 3391:localhost:3389 [external ip] That works fine. I would like to configure Windows 7 to allow the ssh account that I use to log in access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
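    One thing worth trying before touching Windows file sharing at all: copssh is OpenSSH on Cygwin, so if its sftp subsystem is enabled (an assumption), the same port-22 login can fetch files directly, with Windows drives visible under /cygdrive:

        # interactive browsing/download over the existing SSH service
        sftp copsshusername@your.external.ip

        # or a one-off copy (the path is an example of copssh's /cygdrive layout)
        scp "copsshusername@your.external.ip:/cygdrive/c/Users/you/Videos/movie.mp4" .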

  • Password Authentication Fails - NTLMv2

    - by JMeterX
    Environment: Windows 2000 SP4 (EDIT: a domain controller with no trust set up with the Win2008 server), Windows XP machines, Windows 2008 Server, NetApp NAS. Problem: We have a shared folder that resides on a NAS and uses Windows 2008 AD for authentication, with the proper permissions set up. When the Windows 2000 machine tries to open the share residing on the Win2008 machine, it is prompted for a username and password. Upon entering the credentials it continuously re-asks for credentials. Important details: The Windows 2000 machine can ping both the XP machines and the Windows 2008 server. The Windows 2008 machine is mandated to only use NTLMv2. The Windows 2000 machine was originally set to NTLM but was recently switched to "NTLMv2 if negotiated" for the purpose of trying to connect to the share. As I am sure it will come up, we are using Windows 2000 because of contractual obligations. Questions: Why is password authentication failing in this case? After setting a GPO for the Win2000 machine for it to use NTLMv2, do we need to reboot the machine for the changes to take effect? We used SECEDIT to update the GPOs without rebooting. UPDATE: We checked both of the 2008 domain controllers to find an error code. We received: Microsoft_Auth_Package_V1_0, status 0xc000006a, Event ID 4776, which I know to be an authentication error ("The value provided as the current password is not correct"). We know this password to be correct, but since these two domains (Win2000 & Win2008) do not have a trust set up, what authentication account needs to be used? One that resides on the domain hosted by the Win2000 machine?

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago already. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful: http://www.straitmac.com/jforum/posts/list/600.page http://discussions.apple.com/thread.jspa?threadID=725692 Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after reboot, the "MaxBack Engine" process is back. I did notice though that it only appears when logging in with my usual user account; with another account it wasn't launched. So, dear Mac gurus, what could I do about this pest? I guess I could fall back to some Unix hackery and write a cron job that kills any process with that name, but obviously it'd be nicer to be able to clean everything left behind by Maxtor's software off my computer.
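    Since it only starts for one user account, it is most likely launched by a per-user login item or LaunchAgent rather than a system daemon; a sketch of where to look (the plist name below is a guess):

        # list launch items for this user and system-wide, filtering for Maxtor bits
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons | grep -i max

        # if something like com.maxtor.maxback.plist shows up, unload and delete it
        launchctl unload ~/Library/LaunchAgents/com.maxtor.maxback.plist
        rm ~/Library/LaunchAgents/com.maxtor.maxback.plist

    Also check System Preferences > Accounts > Login Items for that user.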

  • Windows 7 host does not respond to ping

    - by gencha
    Today I tried printing on a shared printer belonging to one of our homegroup members. Sadly it did not work (the printer was marked as offline). Shortly after, I noticed I can't even ping the machine that owns the printer (I also cannot remotely access it in any other way I've tried). Currently I'm trying to ping the machine from the router both computers are connected to, and the machine in question doesn't answer. I do receive the echo requests (as verified with Wireshark). I also added a rule in the Windows Firewall to specifically allow ICMP echo requests, but that didn't change anything. I also tried netsh firewall set icmpsetting 8 enable, but that didn't change anything either. Completely disabling the Windows Firewall has no effect on the issue either. One has to wonder: where does Windows log when and why it ignored incoming packets? How can I get to the bottom of this? Here are two ways I found to dig deeper into the issue: enabling logging on the Windows Firewall, and enabling Windows Filtering Platform auditing. Both methods at least give more insight. The plain log file is full of entries like this:

        2011-11-11 14:35:27 DROP ICMP 192.168.133.1 192.168.133.128 - - 84 - - - - 8 0 - RECEIVE

    So the ICMP packets are being dropped as if that were intended. The Event Viewer gives a little more detail:

        The Windows Filtering Platform has blocked a packet.
        Application Information: Process ID: 4, Application Name: System
        Network Information: Direction: Inbound, Source Address: 192.168.133.1, Source Port: 0, Destination Address: 192.168.133.128, Destination Port: 8, Protocol: 1
        Filter Information: Filter Run-Time ID: 214517, Layer Name: Receive/Accept, Layer Run-Time ID: 44

    This same entry is always repeated with two pieces of information changing: Process ID: 420, Application Name: \device\harddiskvolume2\windows\system32\svchost.exe. The service host with PID 420 is the host for the following services: Windows Audio, DHCP Client, Windows Event Log, HomeGroup Provider, TCP/IP NetBIOS Helper, Security Center. Additionally, there is currently this problem with the same machine: even though my network is set to be a "Home network", I am unable to create a new homegroup.

  • IIS7 ASP.NET application - 2 identical apps in 2 identical app pools, 1 is responsive and 1 is not

    - by Ben
    I have an ASP.NET (v4.0) web app that is installed in a virtual directory (as an application) and is hosted in its own app pool. This is repeated for each instance of the app (i.e. per customer). The app pools are integrated (not classic) mode and LoadUserProfile is set to true. Otherwise, default settings. Each instance currently has its own copy of the code/config, and its own data folder (basic file reads/writes). One instance of this app runs well (the operation used for comparison takes ~4 seconds). Every other instance runs slowly (from 10-25 seconds for the same operation). If I move a slower instance to the "fastest" app pool, that instance springs to life. If I move the faster instance into the slower app pool, that instance slows to a crawl. The app pools were created in the same way initially - manually. I later used the PowerShell copy routine to ensure an exact copy of the faster app pool, and still the same behaviour. Comparing the apppool.config files shows they are identical barring the virtual directory assignments. There are no shared resources being blocked, so far as I can tell, and I tested that by shutting down the performant app pool and restarting... slow is still slow, and then when I restart that app pool (so it's loaded last) it's still faster...

  • Windows Server 08 R2 file share File locking, OSX clients

    - by Keith Loughnane
    I've spent the last two weeks banging my head against this wall. I think I'm starting to understand the problem though. I manage a design company; they have 5 Macs (OS X 10.5/.6/.7) connected over SMB to a Windows 2008 R2 file server, and another machine functions as the domain controller (that might not matter). All the Macs can connect OK, with no issues finding the server or logging in. For the most part things are OK. The problem is files locking up. I thought it was a permissions issue at first, but it seems to be file locking. The users open a file (.ind, .pdf, etc.); the file opens, the software reads it and closes it. That's fine, but the folder above the folder containing the file locks: it can't be moved and it can't be renamed. E.g.: /Working/Project01/Imagefiles/image.pdf and /Finished/. The user opens image.pdf, closes it, and wants to move the whole Project01 folder into /Finished/. It shows a username/password dialogue and then does nothing - no error, it just does nothing. Trying to rename gives a dialogue that says you don't have permission. It looks like it's looking for permission locally, which is why I spent about a week looking at that. Eventually I found that Finder on the Macs seems to be keeping the folders open. I can work around it by killing Finder, remounting the shared drive, or closing the file through Server Manager, but that just proves the theory; it's not a solution. Has anyone dealt with this problem?

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008 box. It's running an email server, a Squeezebox server, MS SQL Server, etc. I'm doing remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did, choosing the Home rather than the Public option. Since doing that, some things have stopped working while others are still fine. Shared folders and the email services (ports 25 and 110) are still accessible. VNC (port 5900) and the Squeezeboxes (port 9000) no longer work. Here's what I've tried so far to solve the problem: I checked the network discovery settings to see if anything looked strange. I checked the firewall settings, and those ports appear to be open. Also in the firewall settings, the Network Discovery entries for the Private profile were all on, but the Domain/Public ones were off; I tried turning those on. In Services, I turned on Function Discovery Resource Publication and SSDP Discovery. Any other suggestions?

  • Understanding how IE's SmartScreen works

    - by Kevin Donn
    Today I downloaded an update to our mail server on my dev machine using IE9 on Win7 Pro. I directed IE to save the file on our server's shared drive so I could install it later. When the download finished, IE showed a red banner at the bottom and said that, ".exe is not commonly downloaded and could harm your computer." There were three buttons, "Delete", "Actions", and "View downloads". I selected "Actions" just because I had never seen this before. It showed a "SmartScreen Filter" dialog basically giving three choices: "Don't run this program (recommended)", "Delete program", and "Run anyway". I just canceled the dialog because I didn't want to run it in the first place; I just wanted to download it so I could run it later on the server. So when I did try to run it, it would blow up immediately saying, "Setup was unable to create the directory - Error 5: Access is denied." I tried unblocking the file, "Run as Administrator" even though I already was Administrator, turning off UAC, etc. Cutting to the chase, I finally downloaded the file again, ran WinMerge on the two and it showed they were identical, except the new one ran fine. I went back to my dev machine, downloaded the file through Firefox and then ran it on the server, again fine. But when I tried again through IE, again SmartScreen showed its red banner and somehow clobbered the file even though it was stored on another machine, and WinMerge can't tell the difference between it and a good file. I've looked around on the web for how SmartScreen works, but they all give user-level descriptions of it. What I want to know is, what does it do to that file to make it unrunnable on another machine? Thanks

  • replacing Buffalo LinkStations with FreeNAS, overall backup strategy, am I on the right path?

    - by Shreko
    We've been using 2 Buffalo LinkStations of 320GB each for a shared directory and employees' server storage (around 20 employees). So only documents (Word, Excel, CAD drawings, etc.) and database backups of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server; the other Buffalo box is located on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for offsite backup. After 3.5 years of using these, capacity is the main limitation. I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue a similar setup, building two low-power boxes with one HD (2TB) each. Is a low-power Atom mobo OK? I'm not sure about the HDs. I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would the eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes and I don't want my users to notice any slowdown. Any suggestions?

  • How can I print from my lion mac mini to my windows XP, with simple file sharing?

    - by Jules
    I have quite a complicated setup, perhaps, and a lot of history on this issue; I'm hoping that I don't have to buy a new printer. I've got an HP Wireless USB Print Server, which requires client software; I can't just use it as an IP printer. The HP software is pretty poor on the Mac, is no longer supported, and often locks up the print server, so it takes considerable effort to actually print something. Let alone if a Windows machine attaches to it first. My printer is an Epson Stylus R285. However, the Windows client software is fine and we can print from Windows 7 / XP without problem. We have simple file sharing set up, as this is the only way I could get Windows XP to talk to Windows 7. However, I can't seem to get my Mac mini to connect to my XP machine as anything other than a guest in order to reach the shared printer. I'm now considering some kind of internet printing, as this would seem the simplest solution. But I'm not sure what will work with my setup.
