Search Results

Search found 11321 results on 453 pages for 'shared libraries'.

Page 351/453 | < Previous Page | 347 348 349 350 351 352 353 354 355 356 357 358  | Next Page >

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago already. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful: http://www.straitmac.com/jforum/posts/list/600.page http://discussions.apple.com/thread.jspa?threadID=725692 Under /Applications I found "Maxtor EasyManage.app" which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Content/Resources. But still, after reboot, the "MaxBack Engine" process is back. I did notice though that it only appears when logging in with my usual user account; with another account it wasn't launched. So, dear Mac gurus, what could I do about this pest? I guess I could fall back to some Unix hackery and write a cronjob that kills any process with that name, but obviously it'd be nicer to be able to clean up from my computer everything left behind by Maxtor's piece of software.
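
    A minimal sketch of both ideas hinted at above: first hunting for whatever relaunches the process for that one user, and only then the cron-based kill as a last resort. The paths are just the standard macOS launch-item locations and are assumptions, not confirmed from the question; the Login Items list under System Preferences > Accounts is worth checking too.

        # look for leftover Maxtor launch items for the affected user
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons /Library/StartupItems 2>/dev/null | grep -i max
        launchctl list | grep -i max    # anything loaded right now?

        # last-resort cron job (runs every minute) that kills the process by name
        # add via: crontab -e
        * * * * * /usr/bin/killall "MaxBack Engine" >/dev/null 2>&1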

    Read the article

  • IIS7 ASP.NET application - 2 identical apps in 2 identical app pools, 1 is responsive and 1 is not

    - by Ben
    I have an ASP.NET (v4.0) web app that is installed in a virtual directory (as an application) and is hosted in its own app pool. This is repeated for each instance of the app (i.e. per customer). The app pools are integrated (not classic) mode and LoadUserProfile is set to true. Otherwise, default settings. Each instance currently has its own copy of the code/config, and its own data folder (basic file read/writes). 1 instance of this app runs well (operation used for comparison takes ~4 seconds). Every other instance runs slowly (from 10-25 seconds for the same operation). If I move the slower instance to the "fastest" app pool that instance springs to life. If I move the faster instance into the slower app pool that instance slows to a crawl. The app pools were created in the same way initially - manually. I later used the powershell copy routine to ensure an exact copy of the faster app pool and still the same behaviour. Comparing the apppool.config files shows they are identical barring the virtual directory assignments. There are no shared resources that are being blocked, so far as I can tell, and I tested that by shutting down the performant app pool and restarting... slow is still slow, and then when I restart that app pool (so it's loaded last) it's still faster...
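
    One way to rule out a hidden per-pool setting is to dump both pools' effective configuration and diff the output; a sketch only, where "FastPool" and "SlowPool" are placeholder names for the two app pools:

        %windir%\system32\inetsrv\appcmd.exe list apppool "FastPool" /text:* > fast.txt
        %windir%\system32\inetsrv\appcmd.exe list apppool "SlowPool" /text:* > slow.txt
        fc fast.txt slow.txt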

    Read the article

Windows 7 host does not respond to ping

    - by gencha
    Today I tried printing on a shared printer on one of our homegroup members. Sadly it did not work (printer marked as offline). Shortly after, I noticed I can't even ping the machine that owns the printer (I also can not remotely access it in any other way I've tried). Currently I'm trying to ping the machine from the router both computers are connected to (and my machine in question doesn't answer). I do receive the echo requests (as verified with WireShark). I also added a rule in the Windows Firewall to specifically allow ICMP echo requests, but that didn't change anything. I also tried netsh firewall set icmpsetting 8 enable, but that didn't change anything either. Completely disabling the Windows Firewall has no effect on the issue either. One has to wonder, where does Windows log when and why it ignored any incoming packets? How can I get to the bottom of this? Here are some ways I found to dig deeper into the issue:

      - Enabling logging on the Windows Firewall
      - Enabling Windows Filtering Platform Auditing

    Both methods at least give more insight into the issue. The plain log file is full of entries like this:

        2011-11-11 14:35:27 DROP ICMP 192.168.133.1 192.168.133.128 - - 84 - - - - 8 0 - RECEIVE

    So the ICMP packets are being dropped as if that was intended. The Event Viewer now gives a little bit more details:

        The Windows Filtering Platform has blocked a packet.

        Application Information:
          Process ID:           4
          Application Name:     System

        Network Information:
          Direction:            Inbound
          Source Address:       192.168.133.1
          Source Port:          0
          Destination Address:  192.168.133.128
          Destination Port:     8
          Protocol:             1

        Filter Information:
          Filter Run-Time ID:   214517
          Layer Name:           Receive/Accept
          Layer Run-Time ID:    44

    This same entry is always repeated with 2 points of information changing:

        Process ID:       420
        Application Name: \device\harddiskvolume2\windows\system32\svchost.exe

    The service host with the PID 420 is the host for the following services:

      - Windows Audio
      - DHCP Client
      - Windows Event Log
      - HomeGroup Provider
      - TCP/IP NetBIOS Helper
      - Security Center

    Additionally, there is currently this problem with the same machine: Even though my network is set to be a "Home network", I am unable to create a new homegroup.
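
    The Filter Run-Time ID in that audit event can be matched against a dump of the active filters to see which rule (and which provider) is actually dropping the packets. A sketch, run from an elevated command prompt; the ID 214517 comes from the event above:

        rem dump the active Windows Filtering Platform filters
        netsh wfp show filters
        rem then search the resulting filters.xml for the Filter Run-Time ID (214517)
        rem to see which provider/rule owns the filter dropping the ICMP echo requests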

    Read the article

  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees) we use our server (Windows Server 2003) for the following:

      - Email via Microsoft Exchange
      - Central storage
      - Acting as a print server
      - User authentication / Active Directory

    There are significant costs associated with running a server like this:

      - Electricity, first for the server itself then for the air conditioning required (this thing pumps out a lot of heat)
      - Noise (of which there is a lot)
      - IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong)

    I've found ways to replace many of these functions with cheaper (better?) alternatives:

      - Google Apps / GMail is a clear win for us: we have so many spam related problems it's not even funny, and Outlook is dog slow on our aging computers
      - You can buy networked storage devices with built in print servers, such as the Netgear ReadyNAS™ RND4210, that would allow us to store/share all of our documents, and allow us to access printers over the network

    The only thing that I can't figure out how to do away with is the authentication side of things - it seems to me that if we got rid of our server, you'd essentially have a bunch of independent PCs that had no shared pool of user accounts / no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end but without the expensive central server?

    Read the article

  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: Get a small team using a standard development image rather than 4 software devs setting up their own environments.

    Why: it takes a day or days to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, android-sdk, etc. It's a giant PITA that when repeated 4 times by 4 developers (not sys admins) wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts, set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images. This doesn't really address tooling though, or the GUI-desktop dev that needs doing. So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian with Openbox and must allow a mix of 3rd gen Core i7 notebooks (8GB minimum) to work both single and multihead. Important: the lappies are not the same, but a mix of 2012 MacBooks and PCs. So:

      - Virtualization: is doing all of your work within a VM, like VirtualBox, practical on this hardware or annoying?
      - Ghosting: will laptops from different manufacturers make this impractical?
      - DIY distro: short of scripting a bunch of package installs, I don't know if there's any "distro-maker" that could keep this from being an epic project of scripting package installs.

    So any advice?

    Read the article

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin Backup Buddy, which requires HTTP loopback connections to monitor / complete backups. This works fine on memelab.com.au, but doesn't work at any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON), but I find this unsatisfactory due to the messy URLs. See BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled.

    Here is the reply from my host:

        …as main domain have it's own separate DNS entry it have localhost entry which helps for looback connections where as subdomains don't have separate DNS zone, so it is not possible to create looback connections for it.

    I have cPanel access to the 'advanced zone editor' - is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 available local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in Backup Buddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!)

        nslookup itours.memelab.com.au
        Server:   203.88.112.33
        Address:  203.88.112.33#53

        Non-authoritative answer:
        Name:     itours.memelab.com.au
        Address:  117.55.224.177
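
    If the account allows SSH or a cron job on the server itself, a quick way to confirm exactly what the plugin is seeing is to attempt the loopback request from the server. A sketch only; wp-cron.php is WordPress's own cron endpoint and the URLs are the ones from above:

        curl -I http://memelab.com.au/wp-cron.php            # main domain: expect an HTTP response
        curl -I http://staging.memelab.com.au/wp-cron.php    # subdomain: likely fails to resolve/connect locally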

    Read the article

• Setting up VirtualBox for outside access

    - by Morgan Green
    I have a computer running a server that my subdomain on my shared hosting account points to, i.e. subdomain.mydomain.org goes to my home server. What I'm wanting to do is be able to access my VirtualBox servers through that subdomain and a different port. E.g.:

        Ubuntu VirtualBox Server 1
          Username:    Ubuntuhost1
          Password:    MyUbuntuHost1
          Port:        4000
          Internal IP: 192.168.1.60
          External IP: 24.29.138.45

        Ubuntu VirtualBox Server 2
          Username:    UbuntuHost2
          Password:    MyUbuntuHost2
          Port:        4001
          Internal IP: 192.168.1.61
          External IP: 24.29.138.45

    Now I want to be able to access RDP number 1 through port 4000, but if I access port 4001 it will connect to the server on port 4001; both using the same subdomain. The next issue is the fact that even though I know what the IP addresses are on the router for the VirtualBox hosts (through ifconfig), it doesn't change the fact that they don't show up on the router. If anyone knows how to configure this to work please help me out, because I've been racking my brain to the highest extent I can.

    Edit, to clarify: my ports on the router are edited to forward port 4000 to internal IP 192.168.1.63 (my Ubuntu internal IP address). When I go to my router home page my VirtualBox internal IP address doesn't show in the attached device listings, so I set up port forwarding anyway to the VirtualBox internal IP. My end goal is that when I connect to mydomain.org through port 3389 it takes me to my host computer's server, but if I put in mydomain.org and go through port 4000 it redirects to my VirtualBox server. Is this even possible? Sorry; I'm trying to clarify the most I think I can, I just don't know how else to explain my issue.
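
    If the guests use VirtualBox NAT networking they will never appear on the router as attached devices; the usual options are either to switch them to bridged networking (so 192.168.1.60/61 become real LAN addresses the router can forward to) or to forward the ports to the host and let VirtualBox map them into the guests. A sketch of the NAT port-forwarding variant; the VM names are placeholders and it assumes an RDP server is listening on 3389 inside each guest:

        VBoxManage modifyvm "UbuntuServer1" --natpf1 "rdp1,tcp,,4000,,3389"
        VBoxManage modifyvm "UbuntuServer2" --natpf1 "rdp2,tcp,,4001,,3389"
        # then forward ports 4000 and 4001 on the router to the host machine's LAN IP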

    Read the article

• How can I print from my Lion Mac mini to my Windows XP machine, with simple file sharing?

    - by Jules
    I have quite a complicated setup, perhaps, and a lot of history on this issue; I'm hoping that I don't have to buy a new printer. I've got an HP Wireless USB Print Server, which requires client software; I can't just use it as an IP printer. The HP software is pretty poor on the Mac, is no longer supported, often locks up the print server, and it takes some considerable effort to actually print something, let alone if a Windows machine attaches to it first. My printer is an Epson Stylus R285. However, the Windows client software is fine and we can print from Windows 7 / XP without problem. We have simple file sharing set up, as this is the only way I could get Windows XP to talk to Windows 7. However, I can't seem to get my Mac mini to connect as anything other than a guest to my XP machine, to connect to the shared printer. I'm now considering some kind of internet printing, as this seems the simplest solution, but I'm not sure what will work with my setup?

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

        # all customers have group 'customer'
        Match group customer
            ChrootDirectory /home/%u      # jail in home directories
            AllowTcpForwarding no
            X11Forwarding no
            ForceCommand internal-sftp    # force SFTP
            PasswordAuthentication yes    # for non-customer accounts we use keys instead

    Our servers are running Ubuntu 12.04 LTS.
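
    One pattern that fits these constraints is to give each extra login its own password but the same UID/GID as the main account, so anything it creates is owned by customer. This is only a sketch, untested against the chroot block above; the account names are placeholders:

        # create an alias login that shares customer's UID (-o permits the duplicate UID)
        sudo useradd -o -u "$(id -u customer)" -g customer -d /home/customer customer_developer1
        sudo passwd customer_developer1
        # note: with ChrootDirectory /home/%u the alias would be jailed to a directory
        # named after itself; it would need to become %h (the user's home directory) instead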

    Read the article

  • VirtualBox: using physical partition as virtual drive

    - by Hamman Samuel
    Background: I am using VirtualBox installed on Windows 7. From within VirtualBox I am using Xubuntu as a virtual OS. The reason I chose this approach is so that I don't have to keep turning off Windows and rebooting from Xubuntu every time I need to switch OSes. And VirtualBox's seamless mode is pretty amazing, allowing me to see Xubuntu and Windows 7 all in one screen.

    Issue: Now I am thinking of a way to have Xubuntu more integrated into my system. By this I mean I want to have a physical partition for Xubuntu. But I want to still have the feeling of the seamless mode.

    Question: So finally, my question is: is it possible to load a partition in VirtualBox as a virtual OS?

    Case examples: The ideal scenario would be: I physically boot up and log in to Windows 7. Now I want to access Xubuntu, so I load VirtualBox and access my Xubuntu partition without rebooting. And the other way around too, i.e. I boot up the system, log in to Xubuntu, and can access the actual Windows 7 partition through VirtualBox.

    Other info: Please note that I am not talking about getting access to files, as I have a completely separate partition for my files, and am very familiar with VirtualBox's Shared Folders option.
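
    VirtualBox can do this through a "raw disk" VMDK that points at a physical partition instead of a disk image. A sketch of the Windows 7 host side only; the drive and partition numbers are placeholders, it needs an elevated prompt, and booting a natively-installed OS inside a VM can still trip over bootloaders and drivers:

        VBoxManage internalcommands createrawvmdk -filename C:\VMs\xubuntu-raw.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3
        rem then attach xubuntu-raw.vmdk to a new VM as its hard disk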

    Read the article

• CUPS printer on virtual machine can be accessed via CUPS admin, but not by XP?

    - by SJaguar13
    I have a Zebra label printer connected to a Linux Mint virtual machine. It was set up with CUPS and a Windows XP computer can then print to it via http://192.168.1.76:632/printers/labelprinter. That was all fine and dandy. I then hooked up a Fargo Pro L PVC card printer to a Windows XP virtual machine. I had to disconnect the label printer as the server that hosts both virtual machines only has 1 parallel port. Now I have plugged in the Zebra again, and it cannot print from the Windows XP computer anymore. If I go to the CUPS admin panel on the Windows XP computer, I can see it, everything looks fine, and I can send it a test page to print, which works. If I try to print from Windows, I get an error that the printer is not found/cannot connect to the server. The only other thing that changed was the firewall on the router to allow remote desktop to another computer from outside the network, but all the firewall stuff was for external use. Nothing affected the IP address of the internal network. The Linux Mint VM also had a PDF printer that was shared with CUPS. That printer is also down. I tried setting up a new CUPS installation on another VM, and when I go to share it with XP, I get the same error. I don't know what to try. It has access, it can get to the admin from that computer, it seems to be up and ready, but when Windows tries to connect, the printer isn't found even though 4 days ago everything was fine. Any ideas?
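
    Since the CUPS web admin is reachable but Windows connections fail, it may be worth re-confirming on the Mint VM that sharing and remote access are still switched on after the re-plug, and checking what cupsd is actually listening on. A sketch; the queue name comes from the URL above and nothing here is a confirmed cause:

        lpstat -p -d                                      # is the queue present and accepting jobs?
        cupsctl --share-printers --remote-any             # allow access from other hosts
        sudo lpadmin -p labelprinter -o printer-is-shared=true
        sudo netstat -ltnp | grep cupsd                   # listening on more than localhost?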

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install

        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
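
    The publickey failure happens when the deploy host itself tries to clone the repository, so a key (or a forwarded agent) has to be available inside that SSH session, not just for your own login. A quick way to test the chain by hand before touching the Capistrano config; domainname.com is the placeholder from above and <git-host> stands for wherever the repository actually lives:

        ssh-add -L                                    # is a key loaded in the local agent?
        ssh -A root@domainname.com 'ssh -T user@<git-host>'   # does agent forwarding reach the git host?

    If the forwarded test works, Capistrano's forward_agent ssh option is the usual way to carry the same agent through a deploy; alternatively a deploy key generated on the server and authorized at the git host avoids forwarding altogether.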

    Read the article

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients--they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. So from what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR. But I've had no success so far on that front. I've tried all the following in the local group policy editor on the Windows 7 server:

      - Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
      - Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
      - Added the names of corresponding shares to "Network access: Shares that can be accessed anonymously"
      - Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment

    Any advice would be highly appreciated... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.
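
    A few additional settings that often matter when XP clients fall back to null sessions or guest logons, offered only as things to check; the policy names are the standard locations and the share name below is a placeholder, none of it verified against this particular setup:

        rem make sure Guest is not blocked from network logons
        rem (Local Security Policy > User Rights Assignment > "Deny access to this computer from the network")

        rem force network logons from non-domain clients to map to Guest
        rem (Security Options > "Network access: Sharing and security model for local accounts" = Guest only)

        rem allow a specific share to be reached by null sessions
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d ShareName /f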

    Read the article

  • phpmyadmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to log in:

        # phpMyAdmin - Web based MySQL browser written in php
        #
        # Allows only localhost by default
        #
        # But allowing phpMyAdmin to anyone other than localhost should be considered
        # dangerous unless properly secured by SSL

        Alias /phpMyAdmin /usr/share/phpMyAdmin
        Alias /phpmyadmin /usr/share/phpMyAdmin

        <Directory "/usr/share/phpMyAdmin/">
           Options Indexes FollowSymLinks MultiViews
           AllowOverride all
           Order Allow,Deny
           Allow from all
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/>
           <IfModule mod_authz_core.c>
             # Apache 2.4
             <RequireAny>
               Require ip 127.0.0.1
               Require ip ::1
             </RequireAny>
           </IfModule>
           <IfModule !mod_authz_core.c>
             # Apache 2.2
             Order Deny,Allow
             Allow from All
             Allow from 127.0.0.1
             Allow from ::1
           </IfModule>
        </Directory>

        # These directories do not require access over HTTP - taken from the original
        # phpMyAdmin upstream tarball
        #
        <Directory /usr/share/phpMyAdmin/libraries/>
           Order Deny,Allow
           Deny from All
           Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/lib/>
           Order Deny,Allow
           Deny from All
           Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/frames/>
           Order Deny,Allow
           Deny from All
           Allow from None
        </Directory>

        # This configuration prevents mod_security at phpMyAdmin directories from
        # filtering SQL etc. This may break your mod_security implementation.
        #
        #<IfModule mod_security.c>
        #   <Directory /usr/share/phpMyAdmin/>
        #     SecRuleInheritance Off
        #   </Directory>
        #</IfModule>

    When I go to the phpMyAdmin web page, I am not prompted for a user and password before getting the error message: "Forbidden: You don't have permission to access /phpmyadmin on this server." My system is Fedora 20.
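
    Fedora 20 ships Apache 2.4, which uses the newer mod_authz_core syntax (Require ...) for access control; 2.2-style Order/Allow lines mixed into a 2.4 configuration are a very common cause of exactly this 403-before-login symptom. A sketch of what to check, on the assumption that the main <Directory "/usr/share/phpMyAdmin/"> block simply needs the 2.4 syntax:

        httpd -v                                   # confirm the Apache version
        httpd -M | grep access_compat              # is the 2.2 compatibility module loaded?
        # if the 2.2 directives are the problem, replace Order/Allow in the main
        # <Directory> block with:
        #     Require all granted
        apachectl configtest && systemctl restart httpd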

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services:

      - Email is hosted on IMAP.
      - Working files are in Dropbox.
      - Source code is managed in git.

    However, the following are things I always miss when jumping on the laptop:

      - Installed applications (current versions)
      - Installed libraries & utilities (/usr/local)
      - Apache VirtualHosts & other configurations (/etc)
      - Disk image files for VMs

    My current method is to connect the MacBook via FireWire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster & easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
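
    For the nightly "bring the MacBook up to date" part, the same rsync approach works over the home network and can simply exclude the items that make it slow. A sketch only; the hostname, paths and excludes are placeholders, -E is the extended-attributes flag of Apple's bundled rsync, and Remote Login has to be enabled on the MacBook:

        rsync -avE --delete \
            --exclude 'Library/Caches/' \
            --exclude 'Library/Mail/' \
            --exclude 'VirtualBox VMs/' \
            ~/ me@macbook.local:/Users/me/

    For the repeatable-install side, a scripted list of packages (for example a shell script of Homebrew/MacPorts installs plus the /etc and /usr/local cherry-picks) is the usual low-tech form of configuration management on two Macs.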

    Read the article

  • Access NFS share from cygwin?

    - by Jason Voegele
    We have a Windows 2003 Server on which we have installed Microsoft's Services for UNIX, and we have mounted a few NFS shares that contain shared resources that we need to access from this box. When I log in to this server with remote desktop, I am able to browse the contents of the NFS shares and everything works fine. However, one use case that we have is that we need to access this server using SSH, and still be able to access the NFS shares. We are running the Cygwin SSH daemon to provide SSH access to the server, but for some reason when we log in to the Windows 2003 server using SSH we can no longer access the NFS shares. To demonstrate, here is the output of the 'mount' command, first from a Cygwin shell when logged in with remote desktop:

        $ mount
        C:/cygwin/bin on /usr/bin type ntfs (binary,auto)
        C:/cygwin/lib on /usr/lib type ntfs (binary,auto)
        C:/cygwin on / type ntfs (binary,auto)
        C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
        O: on /cygdrive/o type nfs (binary,posix=0,user,noumount,auto)
        P: on /cygdrive/p type nfs (binary,posix=0,user,noumount,auto)
        Z: on /cygdrive/z type nfs (binary,posix=0,user,noumount,auto)

    And now, the same 'mount' command when logged in with SSH:

        $ mount
        C:/cygwin/bin on /usr/bin type ntfs (binary,auto)
        C:/cygwin/lib on /usr/lib type ntfs (binary,auto)
        C:/cygwin on / type ntfs (binary,auto)
        C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)

    Notice the missing O:, P: and Z: NFS shares in the latter. Can anyone tell me why I am unable to see these NFS shares when logged in with SSH? Thanks!
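
    Drive letters belong to the logon session that created them, so mappings made in the interactive desktop session are generally not visible to the separate logon the Cygwin SSH daemon creates; the mounts most likely have to be re-established inside the SSH session itself. A heavily hedged sketch using the Services for UNIX mount command; the server and export names are placeholders, and the SFU mount.exe (not Cygwin's own mount) must be the one that runs:

        # inside the SSH session
        net use                                   # which session-level mappings exist here?
        mount -o anon '\\nfsserver\export' O:     # SFU/Windows NFS mount; ensure it wins over Cygwin's mount in PATH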

    Read the article

  • LAMP Setup, PHP's session_start permission denied

    - by Andrew
    I'm trying to set up a development environment for a legacy system that runs CentOS 4.8, PHP 4.3.9, and MySQL 4.1.22. I'm matching OS and software versions to keep the development server as close to the production server as possible. When I fire up phpMyAdmin's setup script (version 2.11.10.1, of course) the installation errors out and I see these errors in my error log:

        [client 172.18.141.74] PHP Warning:  session_start(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in /home/www/intranet/phpmyadmin/libraries/session.inc.php on line 87
        [client 172.18.141.74] PHP Warning:  Unknown(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [client 172.18.141.74] PHP Warning:  Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0

    I've done some searching on ServerFault and on teh Googles and I see that a common reason for this error is that the session.save_path isn't writable by the www user. I also found where this path is set in /etc/php.ini. My session.save_path is set to:

        session.save_path = /var/lib/php/session

    I've since changed the owner and the group of /var/lib/php/session and still have the same error. Here's the result of ls -la for /var/lib/php:

        [root@localhost php]# ls -la
        total 24
        drwxrwxr-x   3 www  www  4096 Oct 23 20:21 .
        drwxr-xr-x  17 root root 4096 Oct 23 20:31 ..
        drwxrwx---   2 www  www  4096 Jun  1  2009 session

    ...But I'm still getting the same error. Is there another possibility for why I'm getting this error?
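
    Two things worth checking beyond the directory mode, since this is CentOS: whether Apache really runs as the www user that now owns the directory, and whether SELinux is denying the write regardless of the Unix permissions. A sketch, nothing here confirmed as the cause:

        ps aux | grep httpd | head -3             # which user do the httpd workers run as?
        grep -i '^User' /etc/httpd/conf/httpd.conf
        getenforce                                # is SELinux enforcing?
        ls -ldZ /var/lib/php/session              # current SELinux context on the session dir
        restorecon -Rv /var/lib/php/session       # restore the default context if it was lost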

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1GB of data and indices combined, on a dedicated Linux server. DB version is '5.0.89-community'. Configuration is controlled via cPanel. PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change. Access from the remote IP address is properly configured. The website gets around 10K hits per day with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until at some point the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Resetting the server will always fix the problem for a day or two and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion but constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute. Though I don't think it's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This server is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it but I'm at a loss as to where to start looking. Any advice is appreciated.
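
    MySQL does have a built-in per-host block that matches these symptoms: after max_connect_errors aborted connections from one host, that host is refused until the host cache is flushed (a server restart also clears it), and there are per-user resource limits as well. A sketch of what to check, assuming shell access to the database server; whether this is actually the cause here is an assumption:

        mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connect_errors'"
        mysqladmin -u root -p flush-hosts          # clears the blocked-host cache without a restart
        # per-user resource limits, if any were ever set:
        mysql -u root -p -e "SELECT user, host, max_questions, max_connections FROM mysql.user"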

    Read the article

• What Media Extender / Centre setup should I use?

    - by Bryn Hird
    I have installed Cat6 throughout the house, which I use for telephony and network. In my cellar I have a NAS server and a gigabit switch, and I want to install a media centre to stream my videos, music, photos and live TV (coax from the aerial to the cellar) over the Cat6. Yeah, I know I can get stuff on the internet, but the shared experience of watching TV as a family as it happens is a big plus for live TV. I'm aiming for 1080p. I want different users to be able to watch different channels. Max users = 4. I've played a little with Windows Media Centre; it works fine with live TV. Likewise I have XBMC up and running with live TV. The issue I have is what do I put near the TV. I'd like a consistent user interface (grandma and the other technophobes in the house are continually pestering me on how to use different TVs, change channel, inputs etc.) so a key part of this for me is to make the user experience the same and simple, i.e. no keyboards / PCs hanging around the TV. I've just bought a Linksys DMA 2200 to test the Windows Media Centre, but obviously off eBay as they're a dying breed. And with Windows Media Centre removed from Microsoft's plans, such devices will get rarer. And as for 1080p, I think I can forget it with that setup. I have tested an Xbox 360, which also works, but ditto on Microsoft's plans for WMC. I was thinking of a WD TV Live to test the XBMC setup. Now to the question. Any advice on Media Centre / Extender setups that will do the job as above and have some degree of futureproofing (building my own with my Raspberry Pi is a last resort)? I'd like to understand the standards involved in the futureproofing if anyone knows (DLNA, RVU etc.).

    Read the article

  • How to set up daisy-chained routers for separate sub-nets?

    - by joe
    This question seems to be similar to others, but I'll take a shot anyway. A client recently switched ISPs from TDS to Comcast Business Class. Before the switch, they had 5 static IP addresses assigned. Now they'll have a single IP address that will change whenever Comcast decides to do so. The issue is that this internet connection will be shared between two companies, both having (and wanting to keep) their own private subnets. Because TDS was supplying multiple IP addresses to the one location, this allowed me to put each router on the switch. Now, with Comcast, they only get one IP address, meaning there has to be a main router before the subnet routers. Luckily, the cable modem has a built-in router, which I would like to connect to each company's router, and still have DHCP enabled on all accounts. Question: what do I need to do to the subnet routers to keep them separate from each other, but still allow internet access through the main router? I would love to say "I tried this", and give you links, but everything I find on the internet only mentions daisy-chaining routers with DHCP disabled.

    Read the article

  • Exchange 2010 Hub Transport Role Fails - Registry Keys Missing?

    - by DKNUCKLES
    I've inherited an attempted Exchange 2010 implementation from a colleague that apparently failed. I've almost managed to bring it back from the dead, but the Hub Transport role fails to install with the following error:

        [10/06/2012 02:30:44.0119] [2] Beginning processing Set-LocalPermissions -Feature:'Bridgehead'
        [10/06/2012 02:30:44.0166] [2] [ERROR] Unexpected Error
        [10/06/2012 02:30:44.0166] [2] [ERROR] The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
        [10/06/2012 02:30:44.0182] [2] Ending processing Set-LocalPermissions
        [10/06/2012 02:30:44.0182] [1] The following 1 error(s) occurred during task execution:
        [10/06/2012 02:30:44.0182] [1] 0.  ErrorRecord: The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
        [10/06/2012 02:30:44.0182] [1] 0.  ErrorRecord: System.ArgumentException: The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.GetTargetRegistryKey(XmlNode targetNode)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.ChangePermissions[TTarget,TSecurity,TAccessRule,TRights](XmlNode targetNode, Dictionary`2 rightsDictionary, GetTarget`1 getTarget, GetOrginalPermissionsOnTarget`2 getOrginalPermissionsOnTarget, SetPermissionsOnTarget`2 setPermissionsOnTarget, CreateAccessRule`2 createAccessRule, AddAccessRule`2 addAccessRule, RemoveAccessRuleAll`1 removeAccessRuleAll)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.SetPermissionsOnCurrentLevel[TTarget,TSecurity,TAccessRule,TRights](XmlNode permissionSetNode, String targetType, Dictionary`2 rightsDictionary, GetTarget`1 getTarget, GetOrginalPermissionsOnTarget`2 getOrginalPermissionsOnTarget, SetPermissionsOnTarget`2 setPermissionsOnTarget, CreateAccessRule`2 createAccessRule, AddAccessRule`2 addAccessRule, RemoveAccessRuleAll`1 removeAccessRuleAll)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.SetPermissionsOnCurrentLevel(XmlNode permissionSetNode)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.SetFeaturePermissions(String feature)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.InternalProcessRecord()
        [10/06/2012 02:30:44.0197] [1] [ERROR] The following error was generated when "$error.Clear(); Set-LocalPermissions -Feature:"Bridgehead" " was run: "The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".".
        [10/06/2012 02:30:44.0197] [1] [ERROR] The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
        [10/06/2012 02:30:44.0197] [1] [ERROR-REFERENCE] Id=BridgeheadLocalPermissionsComponent___2e2dbc2a97cb4429bc2074edc50bedbd Component=EXCHANGE14:\Current\Release\Shared\Datacenter\Setup
        [10/06/2012 02:30:44.0197] [1] Setup is stopping now because of one or more critical errors.
        [10/06/2012 02:30:44.0197] [1] Finished executing component tasks.
        [10/06/2012 02:30:44.0244] [1] Ending processing Install-BridgeheadRole

    I've been unable to find any documentation on how to resolve this issue. Any help would be appreciated.
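
    One workaround that is often suggested for this particular failure is simply to create the missing (empty) registry key so Set-LocalPermissions has something to apply permissions to, and then re-run setup. Offered as an assumption to verify against Microsoft's guidance, not a confirmed fix:

        reg add "HKLM\SOFTWARE\Microsoft\ExchangeServer\v14\Transport" /f
        rem then re-run the role installation, e.g. setup.com /mode:install /roles:HubTransport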

    Read the article

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup there. We're a web dev company and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off site and are now looking for something to run our internal office requirements. Below is a brain dump:

      - Equal numbers of Macs & PCs, about 25 machines in total.
      - We need a central "server" to host files that should be accessible to everyone as a "network drive". If possible we'd like to use low cost hardware for this (Mac or Win based). Disk space should be upward of 1TB.
      - Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves.
      - We need some sort of MS Exchange alternative for things like a shared calendar and especially being able to set Out of Office replies.
      - We have one printer that is connected to the network.
      - The setup should preferably be something that can be managed easily via a graphical interface and NOT require command line skills.
      - Users want to keep using Apple Mail or MS Outlook.

    After a quick Google I came across the Zimbra collaboration suite; can anyone recommend this or any other solution for our office?

    Read the article

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up knowledge on SharePoint deployment and usage (never did it before), due to a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. In my test virtual server, a new site using the Enterprise Wiki template was set up. Went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is the Site Actions menu is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option?

    UPDATE: Another phenomenon observed is that in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page. It does not show its own table listing of pages under that document library, unlike the original Pages document library, which expectedly shows up as a listing. I wonder if this hints at any problems.

    Read the article

  • Mapped network drive connection timeout

    - by Terix
    I have server "Alpha" and server "Beta". Server Beta has a shared folder that is mapped on server Alpha as "X:". On server Alpha there is a .vbs script that runs, takes some files on the local drive and copies them to the X: drive. My issue is that if no user logs on to server Alpha for a long time, it seems like the TCP connection underneath the mapped drive times out, and the .vbs script fails to copy the files. As soon as I log on to the Alpha server with remote desktop, the .vbs succeeds in copying the files. I have made many tests, using file logs to check what was happening, and I found no way to refresh the connection and let the .vbs copy the files unattended. I always have to log on with remote desktop on the Alpha server to refresh the connection and let the .vbs copy the files without issues. What can I do to avoid logging on every time? The .vbs script runs 3 times a day and this is very annoying. I do not have control over server Beta so I cannot change anything there, and I am very limited in the changes I can make on server Alpha (I cannot change the registry and that sort of thing).
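
    Since the stale mapping only comes back to life when a fresh session is created, one low-impact option (no registry changes on Alpha) is to have the scheduled job rebuild the mapping itself just before the copy, or to skip the drive letter and copy to the UNC path with explicit credentials. A sketch of a wrapper batch file; the share name, credentials and script path are placeholders:

        net use X: /delete >nul 2>&1
        net use X: \\Beta\share /user:DOMAIN\account password /persistent:no
        cscript //nologo C:\scripts\copyfiles.vbs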

    Read the article

  • Delay init from starting a service for a period of time?

    - by Matthew
    I am trying to get a rudimentary NFS server up and running. Right now the server is configured as an NFS server as a workaround for a vendor issue (no support for direct attached clustered storage), which we are trying to get them to resolve. The vendor software is Splunk. The Splunk feature we are using requires files to be located on shared storage (which for us is /mnt/nfs until they support a real clustered filesystem). Currently the server has a GFS2 filesystem mounted at bootup (it is the only server with the filesystem actively mounted so there should be no problems with locking). We went with GFS2 so switching over to a clustered filesystem is easy should the vendor begin supporting it. NFS is configured to mount that filesystem at /mnt/nfs, which the Splunk installation then sees. Splunk is configured to find its configuration files in /mnt/nfs. However, I am running into a problem where the Splunk daemon starts before NFS is finished loading, and because it sees nothing at /mnt/nfs it starts creating files there, and then when the files disappear (NFS finishes mounting the share), Splunk craps out. Splunk is set to run at runlevel 3, S90. NFS is set at runlevels 2-5, S60. Is there any way to delay the startup of the Splunk process further?
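
    Two common ways to handle this without touching the vendor package: push Splunk's start later in the boot order than netfs (the script that mounts NFS shares), or gate the start on the mount point actually being populated. A sketch of the second approach for a RHEL/CentOS-style init; the init script name "splunk" and the 60-second wait are assumptions:

        # 1) take Splunk out of the normal runlevel ordering
        chkconfig splunk off

        # 2) append something like this to /etc/rc.local (runs last at boot):
        #      for i in $(seq 1 30); do grep -qs ' /mnt/nfs ' /proc/mounts && break; sleep 2; done
        #      /etc/init.d/splunk start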

    Read the article
