Search Results

Search found 15209 results on 609 pages for 'configuration'.

Page 502/609

  • Windows Server 2003 Setup

    - by Barracksbuilder
    I work at a university maintaining the computer science department server, and I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server runs a domain, though no students physically log into a terminal on it (no computers are part of my domain). Creating an account is a manual process. I did write a PHP script that queries the university's AD, copies the information, and writes it to my AD.

    I then have to create the user's home directory by hand. I tried having AD do it, but since the user never physically logs in, the directory never gets created. Permissions on this folder are set to: user - full, Instructors (group) - full, Users (group) - read, IUSER - read. Inside the user's folder there is a "Private" folder with permissions: user - full, Instructors (group) - full.

    Next is IIS: I create a virtual directory in the default web site pointed at the user's home directory so they have a website. The same goes for an FTP virtual directory in the default FTP configuration so the users can upload files to their website. For MySQL, I have to create a user and password, then create a schema (database) with full access for the student and full access for the instructors account, so instructors can reach the student's database.

    All of this is done manually and takes me a week. The closest description is maybe a shared hosting environment. Is there a better way to do this, scripting-wise, or a better structure for the setup?
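
    For illustration, the MySQL step can be driven from any scripting language by feeding SQL to the mysql command-line client. A minimal sketch, shown in shell syntax for brevity; the student name, password, and the idea of an 'instructors' MySQL account are placeholders and assumptions, not part of the setup described above:

        # Hypothetical per-student MySQL provisioning
        STUDENT=jsmith
        PASS=changeme
        mysql -u root -p -e "
          CREATE DATABASE ${STUDENT};
          CREATE USER '${STUDENT}'@'localhost' IDENTIFIED BY '${PASS}';
          GRANT ALL PRIVILEGES ON ${STUDENT}.* TO '${STUDENT}'@'localhost';
          GRANT ALL PRIVILEGES ON ${STUDENT}.* TO 'instructors'@'localhost';
          FLUSH PRIVILEGES;"

    The same wrapper script could go on to create the home directory and call the IIS and FTP administration tools, so the whole account becomes one command per student.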

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu Server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5:

        Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ...
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured

    I tried a wide variety of approaches suggested by googling, involving various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with MySQL per se (though I will need to figure it out at some point); at this point, my primary concern is that I am unable to apt-get install other packages! What to do?
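
    A minimal recovery sketch, assuming an Ubuntu 12.04 box and that it is acceptable to discard the MySQL data if nothing else works (the log paths are the stock ones; the upstart log may not exist on every install):

        # See why mysqld refuses to start before touching packages
        sudo tail -n 50 /var/log/mysql/error.log
        sudo tail -n 50 /var/log/upstart/mysql.log

        # Once mysqld starts (or the broken package is removed), let dpkg/apt finish up
        sudo dpkg --configure -a
        sudo apt-get -f install

        # Last resort, only if the MySQL data can be thrown away
        sudo apt-get purge mysql-server mysql-server-5.5
        sudo rm -rf /etc/mysql /var/lib/mysql
        sudo apt-get install mysql-server-5.5

    Clearing the half-configured package is usually enough to unblock installing unrelated packages, even before the MySQL problem itself is understood.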

    Read the article

  • Touchpad scroll slow and jumpy

    - by IR
    I have a laptop with a Synaptics touchpad running Windows 7 x64. When I use the scrolling region of the touchpad in some applications, for example Visual Studio 2008, Notepad, and Windows Media Player 12, scrolling is very slow. If I pull the edge of the touchpad slowly, the program scrolls one row at a time (regardless of the lines-to-scroll setting in the mouse configuration). If I pull the edge quickly, though, the program instantly jumps about 20 rows, making it way too fast. In some applications, like Firefox, scrolling works as expected. Changing the scroll-speed setting for the touchpad does not help: if you make it slower it doesn't do the 20-row jump, but then it's horribly slow, and if you make it faster it does the jumps all the time. I have tried both the generic Synaptics drivers and the "special" drivers that HP provides, but they both have the same problem (except that the generic ones can't adjust the scrolling speed, which didn't help anyway). With the generic Windows drivers the scrolling region doesn't work at all. Other mice I've tried with a scroll wheel work as they should.

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs their own instance of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:

    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user and do a git checkout of the application (in the right location)
    - Create tables in the new database and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add this new instance to Capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a salesperson, so something web-based. I could write a (bash) script that does most of these tasks (a rough sketch follows below), and then maybe add a small web-based wrapper where someone could provide the domain and default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means you need a list of all existing customers/instances.
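
    A minimal sketch of such a provisioning script, assuming Debian/Ubuntu tooling; the customer name, repository URL, and password handling are placeholders:

        #!/bin/bash
        set -eu
        CUSTOMER=$1                               # e.g. ./add_customer.sh acme
        DBPASS=$(openssl rand -hex 12)

        sudo adduser --disabled-password --gecos "" "$CUSTOMER"
        mysql -u root -p -e "CREATE DATABASE ${CUSTOMER};
          GRANT ALL ON ${CUSTOMER}.* TO '${CUSTOMER}'@'localhost' IDENTIFIED BY '${DBPASS}';"
        sudo -u "$CUSTOMER" git clone git@example.com:app.git "/home/${CUSTOMER}/app"
        # vhost, cron entries and init data would follow the same pattern, generated from
        # templates, and everything created here should be logged so a matching delete
        # script can undo it when a customer leaves

    Each step the script performs mirrors one of the manual steps above, which also gives the web wrapper a natural audit trail.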

    Read the article

  • Plesk command working in manual script, not in cronjob

    - by dsaunier
    Hi, in order to install a hosting plan, I use Plesk's commands over SSH as specified in their official guide. When typed directly into SSH (PuTTY), it works perfectly. The line is as follows (with values obviously hard-coded when run in the CLI):

        /usr/local/psa/bin/domain --create '.$url.' -owner mynamehere -ip '.IP_SERVER_PLESK.' -status enabled -hosting true -hst_type phys -login '.$ftp_user.' -passwd '.$ftp_pw.' -www false -php true -php_safe_mode false -hard_quota 100M

    I then put that command in a PHP script that does other things after the hosting is installed. Now for the weird part: when calling that script from the CLI it also works fine; I do a ./myscript.php and it installs the hosting, then sends emails etc. However, after I create a cron job to have that same script called regularly, the Plesk command fails. The cron job is set up in Plesk as

        */15 * * * * /usr/bin/php /home/scripts/myscript.php

    and it works fine for everything BUT the Plesk hosting install, which returns "Unable to read Control Panel configuration file" and therefore does not install the domain hosting. Still, this is the same script that I call manually! On that server, is the PHP used by cron different from the one used in the CLI? What am I missing? Help greatly appreciated! Regards.
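
    One frequent difference between an interactive shell and cron is the environment (PATH, HOME, and the user the job runs as), which is often what control-panel utilities need in order to find their configuration. A quick way to compare the two; the log paths are placeholders:

        # As the same user that owns the cron job, dump the interactive environment
        env > /tmp/shell-env.txt

        # Temporary cron entry: dump what cron sees, then diff the two files
        */15 * * * * env > /tmp/cron-env.txt 2>&1

        # If the environment is the culprit, wrapping the job in a login shell often helps
        */15 * * * * /bin/bash -lc '/usr/bin/php /home/scripts/myscript.php' >> /tmp/myscript.log 2>&1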

    Read the article

  • Trying to diagnose network problem: ping 127.0.0.1 (or any address) results in error code 1

    - by Mnebuerquo
    The NIC seems to be working: Windows detects the hardware, has a driver, and reports success. DHCP seems to have gotten an IP address, 192.168.1.101. I released and renewed it and it seemed to work normally. I tried ping 127.0.0.1 as the first step of testing the network configuration:

        Pinging 127.0.0.1 with 32 bytes of data:
        PING: transmit failed, error code 1.

    I read somewhere that net helpmsg [error code] would give a human-readable name for the error code; net helpmsg 1 says "Incorrect function". I've tried disabling the firewall and antivirus in McAfee SecurityCenter and I still get the same error. Could the firewall/antivirus be breaking it even if disabled? Broadcom Advanced Control Suite 2 is installed, and its network test passes all tests, including ping 192.168.1.1, which is the default gateway. If I try ping 192.168.1.1 from the command prompt I get error code 1 again. So does anyone have any theories that would explain this problem? Other tests I should try? Thanks!

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes directly to the live server I've been working on, now that I have a couple of users on it, since I bring it down so often. I created an EC2 image of my live server and set up a separate instance, so now I have two EC2 instances, Stage and Production. I set up GitHub, push changes to Stage, and test my code there; when it's all done and working, I push it to the production branch, and everything is good. There is a slight wrinkle here: I name my files config_stage.js and config_production.js, set up .gitignore on each server, and have my code read the ENV flags and load the appropriate config. Is this the correct approach? And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working, how do I deploy them to production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes from my test server to my live server?
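
    A minimal sketch of one way to carry the package side of those changes across, assuming both instances are Debian/Ubuntu based:

        # On Stage: record which packages are installed
        dpkg --get-selections > packages.list

        # On Production: re-apply that list
        sudo dpkg --set-selections < packages.list
        sudo apt-get dselect-upgrade

        # Keep /etc under version control so configuration drift between the two is visible
        sudo apt-get install etckeeper

    Configuration-management tools (Puppet, Chef, Ansible and the like) formalize the same idea: the list of packages and config files lives in a repository that both servers are built from.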

    Read the article

  • Apache form authentication issues

    - by rfcoder89
    I am trying to authenticate users through Apache using the form authentication method to restrict HTTPS requests to a certain folder. However, regardless of whether the correct login details are provided, it keeps reloading the same page, except that the URL has the form values embedded in it, instead of redirecting to the appropriate page. I need to use the form authentication type instead of basic so I can write my own HTML for the user to log in. I am using Apache 2.4.9 and this is our current configuration.

    Apache config file:

        <Location C:/wamp/www/directory>
            SetHandler form-login-handler
            AuthFormLoginRequiredLocation https://localNetwork.com/username/TestBed/HTML/login.html
            AuthFormLoginSuccessLocation https://localNetwork.com/username/TestBed/HTML/test.html
            AuthFormProvider file
            AuthUserFile "C:/wamp/passwords"
            AuthType form
            AuthName realm
            Session On
            SessionCookieName session path=/
            SessionCryptoPassphrase secret
        </Location>

    And in the login HTML page I've added this for the user to log in:

        <form method="POST" action="/test.html">
            User: <input type="text" name="httpd_username" value="" />
            Pass: <input type="password" name="httpd_password" value="" />
            <input type="submit" name="login" value="Login" />
        </form>

    Read the article

  • Script apparently changing file permissions on Mac OS to 000

    - by half_bit
    I wrote a little shell script that helps install a web application. The script just downloads a zip archive, extracts it, and changes the permissions of the extracted files to the ones needed to run the web app. The problem is that some users reported that after running my script, the permissions of every file in their home directory, or even on their whole computer, changed to 000 (except the actual unzipped files, which do have the correct permissions). The only lines in my script actually doing IO are these:

        URL="http://foo.com/"
        FILENAME="some.zip"
        curl --silent "$URL$FILENAME" -o $FILENAME > /dev/null
        echo "Unzipping...\c"
        if unzip -oqq $FILENAME > /dev/null
        then
            chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
            chown -R www:www *
            rm $FILENAME
            echo "\t\t\tOK"
            exit 0
        else
            echo "\t\t\tERROR"
            exit 1
        fi

    I seriously can't explain this to myself. How can this even be possible? It is entirely possible that the users accidentally ran the script in their home directory, but that still wouldn't explain why the permissions were set to 000 rather than www/777.
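
    For comparison, a minimal defensive sketch; the $1 install-directory argument and the sanity check are assumptions added for illustration, not part of the original script. The point is to anchor the recursive chmod/chown to an explicit, verified directory instead of whatever the current directory happens to be:

        #!/bin/sh
        set -eu
        INSTALL_DIR="$1"                                  # caller states where the app lives
        [ -d "$INSTALL_DIR/app" ] || { echo "not an app directory: $INSTALL_DIR" >&2; exit 1; }
        cd "$INSTALL_DIR"
        chmod -R 777 app/tmp app/webroot                  # paths now resolve inside $INSTALL_DIR
        chown -R www:www "$INSTALL_DIR"                   # never a bare *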

    Read the article

  • Route all traffic of home network through VPN [migrated]

    - by user436118
    I have a typical semi-advanced home network scenario:

    - A cable modem (eth)
    - A wireless router (Netgear N600), eth and wlan
    - A home server (running Ubuntu 12.04 LTS, connected over wlan)
    - A bunch of wireless clients (wlan)

    Lying around I also have another, cheaper wlan router and two different USB wlan NICs that are known to work with Linux. ACTA struck. I want to route ALL of my WAN traffic through a remote server over a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network should then behave so that if the VPN is not up, the home network is separated from the outside world. I am familiar with general system/network practices but lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages, and configurations you'd use to reach this solution. I've also envisioned the following network configuration; please improve it if you see fit:

    - Client: ip 10.1.1.x, nm 255.0.0.0, gw 10.1.1.1, reached via WLAN
    - Wlan router 1: ip 10.1.1.1, nm 255.0.0.0, gw 10.10.10.1, reached via ETH
    - Homeserver eth0: ip 10.10.10.1, nm 0.0.0.0, gw 192.168.0.1, reached via WLAN <<< the VPN is initiated here, and the other endpoint is somewhere on the internet
    - Homeserver wlan0: ip 192.168.0.2, nm 255.255.255.0, gw 192.168.0.1, reached via WLAN
    - Wlan router 2: ip 192.168.0.1, nm 0.0.0.0, gw set via DHCP, uplink connector: cable modem
    - Cable modem: remote DHCP; it has an on-board DHCP server for the ethernet device that connects to it, and only works this way

    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
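
    One common building block for the "separated from the outside world when the VPN is down" requirement is a forwarding policy on the home server that only allows traffic out through the tunnel. A minimal iptables sketch; the interface names (wlan0 for the LAN side, tun0 for the VPN) are assumptions:

        # Default: refuse to forward anything
        iptables -P FORWARD DROP
        iptables -F FORWARD
        # LAN clients may only leave via the VPN tunnel
        iptables -A FORWARD -i wlan0 -o tun0 -j ACCEPT
        # Allow the replies back in
        iptables -A FORWARD -i tun0 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # NAT the clients onto the tunnel
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    If the tunnel interface is down, no forward rule matches and the clients simply lose internet access, which is the desired failure mode.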

    Read the article

  • Set up a layer 2 VLAN between 2 data centres

    - by user41679
    Hello, our data centre provider operates two sites; we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 VLAN between the two sites over a 20 Gbit connection, and that they'd just give me an ethernet cable at each end to connect the locations. At the current site we have Cisco 2960-48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls with which we connect to our internet provider. My question is: what would I need to do to connect the two sites? Could I just plug the ethernet cables they provide into the Cisco switches and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect the two through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the two sites, and each site would have its own internal DNS name, like dc1.xx.com. Sorry if I'm being vague or haven't included enough information; I have a fairly good knowledge of hardware, but we're down a netops guy at the moment and I'd like to get both sites online ASAP! Thanks in advance!

    Read the article

  • Connect from Mac OS X to Windows 7 Desktop

    - by jrn
    I am trying to connect from my MacBook to my Windows 7 machine within my own network - if it works from outside my network that's a plus, but it's not a must-have. My Windows 7 machine is freshly installed with Windows 7 Home Premium. It runs the built-in firewall with no settings changed so far, as well as Microsoft Security Essentials. So far I have tried CoRD and Microsoft's Remote Desktop Connection to connect from my Mac to my Windows machine, without any success. I also tried disabling the firewall on my Windows machine but could not connect either; the reason I did this was to check whether a Windows firewall setting was preventing me from connecting. On top of that, I manually started the Remote Desktop Services and Remote Desktop Configuration services within services.msc. Is there anything else I have to enable for a remote desktop connection? Could there be a router setting I have to tweak? Since I do not want to connect from outside my own network, I thought I wouldn't have to do any port forwarding. The error messages I get are all connection timeouts. I can, however, ping the hostname and/or IP address. Any help would be greatly appreciated. Thanks a lot, jrn

    Read the article

  • What apps can you only get on Mac and not Windows?

    - by ytk
    What apps do you absolutely have to use a Mac to run, with no decent Windows PC equivalent? This is not a religious war. Please be specific and practical. It doesn't have to be a direct one-to-one comparison, but overall usefulness for the task. I'll start off with a few:

    - Keynote -- the animations are quite cool and not available in PowerPoint.
    - iTunes photo sync -- on Windows it makes a copy of all the photos you want to sync, effectively doubling the space taken up by your photos. On a Mac it's easier, as long as you use iPhoto.
    - Keychain -- a centralized password manager tied to the OS. The benefit is that you don't have to set a master password (like Firefox) which you need to enter when starting the browser, and it doesn't reveal your passwords (unlike Chrome, which makes no effort to hide the passwords you have stored in Options).
    - Time Machine -- zero-configuration backup in the background, with an easy interface for restoring a file, or even just a contact in the address book.
    - Text-to-speech -- works in any program, and sounds better than the Windows computer voice.
    - Quick Look -- press the space bar to preview a file. Windows 95 had Quick View, but it was removed.

    Read the article

  • Incoming traffic while on public network

    - by zvikico
    I'm developing a web app and I need to be able to receive incoming traffic from third-party services I use. This is a classic webhooks situation: I send a request with a return address and receive the response (via HTTP) some time later at the given address. The simple solution would be to provide my external IP address and forward the incoming traffic from the router to my machine. However, I'm working in a large office and I cannot control the router configuration, so I'm looking for a different way to achieve this. I do have servers online. I can have a daemon running on one of those servers which handles the incoming traffic. I can run a parallel daemon on my machine which keeps an open connection with the remote daemon (preferably over SSH), and when inbound traffic is received by the remote daemon, it sends it to the local one, which sends it to the correct port on my machine, as if it had been received the natural way. Is there any ready-made solution for this? PS. I'm on OS X and my server is Ubuntu. Thanks, zvikico
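
    SSH already provides this pattern out of the box via remote port forwarding. A minimal sketch; the port numbers and hostname are placeholders (the local app is assumed to listen on 3000, and the webhook return address points at port 8080 on the server):

        # Run on the OS X machine: expose local port 3000 as port 8080 on the server
        ssh -N -R 8080:localhost:3000 user@myserver.example.com

        # On the Ubuntu server, sshd_config needs "GatewayPorts yes" (or "clientspecified")
        # for the forwarded port to be reachable from the internet rather than only from localhost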

    Read the article

  • P410i mirror failed, couldn't find the same disk

    - by Heishiro Mitsurugi
    I have an HP server with a P410i RAID card installed. I had two SATA drives connected (250GB each), configured as a mirror. A few days ago drive one (1) failed, and I had to remove it. I tried to find the same part number here in Venezuela, but I couldn't, so I bought a 500GB SATA drive and connected it to the same bay where the failed 250GB drive was. When the server booted, it asked me if I wanted to rebuild the data. I selected that option, and Windows Server restarted properly. When I got into the ACU (Array Configuration Utility) it told me that it was rebuilding the data. Today the warning went away, and according to the ACU everything is just fine. My question is: was what I did right? Can I create a mirror from a 250GB disk onto a 500GB disk using the P410i? I have done that before, but only using software RAID in Windows, which just uses the space it needs. As a matter of fact, when I did that using Windows I was able to use the remaining space on the bigger drive, but with the P410i I can't use it. Should I be worried? Thanks a lot in advance for any pointers or info you can give on this. Heishiro

    Read the article

  • Apache front end rewriting URLs to different HTTPS ports?

    - by khedron
    Hi all, one of my users is having some trouble with forwarding to an internal web app from a public address. Everything worked fine for him when the situation was like this:

        front page:                  http://www.myexample.com/
        public ref to internal app:  http://www.example.com/app-8903/app.html
        secretly goes to:            http://secret.example.com:8903/app-8903/app.html

    That is, my user is providing the very last URL, with the port information duplicated in the URL base, and they were using that to give a public face that hid both the port and the internal machine name. You could still read the port in the URL base if you looked, but the obvious reference and machine name were hidden. Doing it this way, he could have several different instances of the application running on secret.example.com with different ports, and on the front end it just looked like the URL directory/base was changing. Now the user wants to do the same thing over https:, and the people helping him with the Apache config say it can't be done. Is that so? Without being there to tinker with the configuration myself, I'm not sure what his IT people have tried, but reading through the Apache 2 SSL FAQ and other docs, it seems like it should be possible to rewrite URLs to different ports and still use https:.

    Read the article

  • Could I have destroyed the partitioning scheme/filesystem of HDDs with an external hard drive case with a built-in RAID controller?

    - by th3m3s
    I recently bought a Fantec QB-35US3R to have a nice box on my desk to make backups to. Along with the HDD bay I ordered some 4TB HDDs to run in RAID 5, which is handled by the hardware RAID controller of the Fantec HDD bay. The QB-35US3R arrived a few days before the hard drives, so I got impatient and had the idea of putting three old 1TB disks in the Fantec device, just to test it... Long story short: I had made a backup of the most important data on these three disks before they broke. I had set the configuration scheme on the Fantec device to RAID 3. It seems the Fantec RAID controller has "somehow" destroyed the partitioning scheme or the filesystem, because when put into an HDD docking station the disks get recognized by the OS (Ubuntu/Linux) but are no longer mountable. I tried to recover the data from one HDD via GParted (parted), which ran for some hours without success. I stopped there, before trying other tools, because I read that the longer a hard drive runs after the partitioning has been destroyed, the worse it gets. What could the HDD bay have done to my lovely hard drives? Is there some routine a RAID controller executes when it wants to create a RAID system - like erasing the partition table (which seems implausible to me) or writing some information to every hard drive in the RAID (which seems more likely)? Is there a chance to recover the data from these HDDs, or is the change a RAID controller makes so significant that no software can help?
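
    Whatever happened, the usual first step is to work on an image rather than on the disk itself, so nothing further gets written to the original. A minimal sketch; the device name /dev/sdb and the image path are assumptions:

        # Copy the whole disk to an image file, skipping over read errors
        sudo dd if=/dev/sdb of=/mnt/backup/disk1.img bs=4M conv=noerror,sync

        # Let TestDisk search the image for lost partitions and filesystems
        sudo testdisk /mnt/backup/disk1.img

        # photorec (ships with TestDisk) can still carve individual files
        # even when no partition table or filesystem is recoverable
        sudo photorec /mnt/backup/disk1.img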

    Read the article

  • Windows 7 Ultimate 64-bit won't connect to my wired/wireless networks

    - by A302
    Windows 7 Ultimate 64-bit. Everything was working fine and then just stopped working. The NIC (Realtek PCIe GBE Family Controller) is enabled but does not connect to my router (cables and router ports are good). The wireless adapter (Atheros AR5007EG) is enabled, but the connection is "limited" (encryption type and key have been verified). A laptop running XP can connect both wired and wireless. The SSID is not being broadcast; "Connect even if the network is not broadcasting" is checked. I have checked services.msc for Bonjour and did not see it listed. Network and Sharing Center does not list any active networks. Device Manager lists both devices as functioning properly. The router configuration has not been changed. A virus scan has not found anything. I would like to fix this rather than using Acronis to do a system restore. Thanks in advance for any advice offered in solving this. Update (26 Jan): the NIC and wireless both work using a PCLinuxOS live CD, so it appears the problem is Windows 7 related.

    Read the article

  • Nginx and 1000 WordPress Installs - Optimization

    - by GTE
    Hey, I'm trying to create a rather unusual (IMO) configuration where I have:

    - nginx
    - php-fastcgi
    - mysql
    - 1000 separate WordPress installs (with WP Super Cache)

    Each WP install corresponds to a separate subdomain. Furthermore, I have 1000 cron jobs being called every hour that in turn call a WP plugin (using wget), which retrieves data from an API and posts it to the respective blog. This is all being run on a virtual server with 1024MB of RAM, 4 shared processors, etc. The server is not doing well, especially while the cron jobs are executing: nginx constantly throws 504 errors and the site lags significantly.

    1) Am I crazy for having 1000 individual WP installs? Should I be using WP-MU, and would this help significantly? (I have certain plugin restrictions that make me prefer separate installs, but I could switch if need be.)
    2) Instead of having 1000 unique cron jobs, should I be calling, say, a bash script that then makes the 1000 HTTP requests I need? Could these be made one after another instead of all at once?
    3) Any other suggestions for optimization? Should I be proxying to Apache instead of just using nginx, etc.?

    Any kind of advice would be appreciated. Thanks in advance
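
    On point 2, a single cron entry driving a loop does exactly that. A minimal sketch, assuming a blogs.txt file with one subdomain per line and a hypothetical plugin endpoint path:

        #!/bin/bash
        # Called once per hour from cron instead of 1000 separate jobs
        while read -r blog; do
            wget -q -O /dev/null "http://${blog}/wp-content/plugins/myplugin/update.php"
            sleep 2          # spreads the 1000 requests across the hour instead of firing them at once
        done < blogs.txt

    Serializing the requests trades a burst of 1000 concurrent PHP processes for a steady trickle, which is usually far kinder to a 1GB VPS.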

    Read the article

  • Apache mod_disk_cache on a separate drive

    - by pavs
    How can I set up Apache mod_disk_cache on a separate drive from the one the OS/Apache is installed on? I have set this up in my apache2.conf:

        <IfModule mod_cache_disk.c>
            # cache cleaning is done by htcacheclean, which can be configured in
            # /etc/default/apache2
            #
            # For further information, see the comments in that file,
            # /usr/share/doc/apache2/README.Debian, and the htcacheclean(8)
            # man page.

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /media/cacheHD

            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /

            # The result of CacheDirLevels * CacheDirLength must not be higher than
            # 20. Moreover, pay attention on file system limits. Some file systems
            # do not support more than a certain number of inodes and
            # subdirectories (e.g. 32000 for ext3)
            CacheDirLevels 2
            CacheDirLength 1
        </IfModule>

    It doesn't seem to be caching anything at all. The drive itself is a freshly installed SSD formatted with ext4.
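
    A couple of quick checks worth running first; a sketch assuming a Debian/Ubuntu layout where Apache runs as www-data:

        # Is a cache module actually loaded, and under which name (mod_cache_disk vs. mod_disk_cache)?
        apachectl -M | grep -i cache

        # The cache root must be writable by the Apache run user
        sudo chown -R www-data:www-data /media/cacheHD

        # Request a few pages, then see whether anything shows up on disk
        sudo ls -lR /media/cacheHD | head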

    Read the article

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    Hi, I'm serving static pages from a Sinatra application using Nginx. I've implemented basic authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage"
        failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com,
        request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at:

        /home/me/live/mysite_home/live/mypage.erb

    The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;
            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger, or Nginx thing, or if I'm just missing something.

    Read the article

  • Connection reset to some websites

    - by user143271
    I'm using a 2Wire 3600HGV modem/router. Starting around this afternoon, any time I try to access anything from i.imgur.com I get "The connection to i.imgur.com was interrupted" in Chrome, and the actual error is Error 101 (net::ERR_CONNECTION_RESET). It's network-wide (tested multiple browsers on multiple computers and phones). I can access imgur.com just fine, but nothing from its content server i.imgur.com. If I disable wifi on my phone and use its 4G connection, I can access it just fine, so obviously imgur isn't down. I haven't changed any configuration on the router, and I have tried changing DNS servers (I tried Google and OpenDNS). It also seems that imgur is not the only site: howtogeek and a couple of others have the same problem. They all look like Edgecast CDN content servers, but not all Edgecast CDN servers fail; Tumblr, for instance, works just fine. Does anyone have any idea what would be causing this?

    EDIT: Related to the Edgecast remark, it appears this is a specific Edgecast server: gs1.wpc.edgecastcdn.net. Tumblr's content is on gs1.wac.edgecastcdn.net, so it might be on a different server.

    EDIT #2: These sites all respond to ping just fine as well.
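
    A few comparisons between a working and a failing host can narrow down where the reset happens. A hedged diagnostic sketch (the MTU check is only a guess at one common cause of resets behind consumer routers; 1472 bytes of payload plus 28 bytes of headers equals a standard 1500-byte MTU):

        # Does the path differ between a failing and a working CDN node?
        traceroute gs1.wpc.edgecastcdn.net
        traceroute gs1.wac.edgecastcdn.net

        # Watch exactly where the HTTP exchange dies
        curl -v http://i.imgur.com/ -o /dev/null

        # Linux ping syntax: send a full-size, don't-fragment packet
        ping -M do -s 1472 i.imgur.com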

    Read the article

  • Windows 7 clean install becomes corrupt after reboot (repeated many fresh installs)

    - by pjotr_dolphin
    My laptop keeps crashing on boot after a clean Windows 7 install. OK, here is the story and some facts.

    Computer: Samsung NP900X3C-A04HK (256GB SSD, 8GB RAM)
    OS to install: Windows 7 Ultimate SP1 (not from Samsung, my own fresh copy)

    I purchased this laptop about a year ago and never booted it into the Windows Home edition that was installed on it; I installed Ubuntu directly on the machine. I chose full disk encryption for the install, so of course it wiped the complete disk (including the Samsung recovery partition). After some time, I felt like going back to Windows, as Windows 7 is actually quite nice, so I bought a fresh Windows 7 Ultimate with SP1.

    Now the tricky part. Windows installs perfectly, and after installing all Windows updates, the drivers from Samsung, and the software I need, it was time to shut it down and go to bed. Starting it up again, it does not boot. These are the types of errors I have gotten so far (I have fresh-installed it more than a dozen times now and tried various suggestions from threads on the net):

        Windows failed to start...
        Status: 0xc000000f
        Info: The boot selection failed because a required device is inaccessible.

        File: /boot/bcd
        Status: 0xc000000f
        Info: an error occurred while attempting to read the boot configuration data.

    And some other errors, not all the same; I don't remember them all. I have run different disk checks, and all say my SSD is in perfect shape. Note: soft reboots from the Windows menu always work and never corrupt anything, but if I shut down and then start it up again, that is when it happens. Can someone help me avoid having to go back to Ubuntu? What could the cause be, and how can it be fixed so I don't get these problems again?

    Read the article

  • WebSphere hung threads, how can I track them down?

    - by Puzzled
    We have an application running on WebSphere (unfortunately 6.1, which is no longer supported; it has not yet been migrated to a later version in production) that becomes entirely unresponsive because of hung threads. As far as I can tell, we entirely exhaust one of the thread pools. I have activated hung thread detection, and I get a core/thread dump when hung threads are detected. The server can run for several days without problems, but it has crashed twice this week. When I load the core/thread dump in the IBM Thread and Monitor Dump Analyzer for Java, it tells me there are a certain number of hung threads (this time it was 2, last time 11), multiple threads (usually around 40) "waiting on condition", and some running threads. I believe one of the thread pools is around that size (50). What I see in there are threads waiting for locks, holding locks, or in wait. Most of them show a stack trace which always ends like this:

        at java/lang/Object.wait(Native Method)
        at java/lang/Object.wait(Object.java:231)

    Now, how can I track this down to a server configuration problem, an application issue, a WebSphere problem, or something else? How is this supposed to help me track down the problem when almost everything in there refers to IBM code? I cannot ask IBM for help, as 6.1 is now an unsupported version of WebSphere, and while work has been done to make the application run under WebSphere 7, we are not yet ready to switch to it in production.
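
    One low-tech approach that often helps: take several dumps a short interval apart and look for application frames that never move. A sketch, assuming a single application-server JVM and a placeholder package name; on IBM JVMs a SIGQUIT (kill -3) writes a javacore file rather than killing the process:

        # Find the app server JVM (assumption: only one WebSphere java process)
        PID=$(pgrep -f WebSphere | head -1)

        # Three thread dumps, ten seconds apart
        for i in 1 2 3; do kill -3 "$PID"; sleep 10; done

        # Threads stuck on the same application frame in every javacore are the suspects
        grep -B2 -A6 "com/mycompany" javacore.*.txt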

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple - I just use the private domain names/IPs - but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to script the configuration with the EC2 tools just yet. As I see it, I have two options:

    1. Two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the security group permissions, nor do I have to pay for the traffic, and there's redundancy in the monitoring solution - I could monitor the Nagios servers. Cons: two installations to deal with, and I'd need to run another server instance.
    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security - the security group will have to have NRPE (5666) opened for one source IP - and paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.

    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
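
    A third pattern worth mentioning for a handful of cross-region hosts is running the checks over SSH instead of opening NRPE across the internet, using the stock check_by_ssh plugin. A sketch; the plugin paths, hostname, and thresholds are assumptions:

        # From the Nagios server in US East, check load on an EU West host over SSH (key-based auth)
        /usr/lib/nagios/plugins/check_by_ssh -H eu-host.example.com \
            -C "/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6"

    This keeps port 22 as the only thing opened between regions, at the cost of slightly heavier checks.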

    Read the article
