Search Results

Search found 26043 results on 1042 pages for 'development trunk'.

Page 938/1042 | < Previous Page | 934 935 936 937 938 939 940 941 942 943 944 945  | Next Page >

  • How to remove request blocking on apache reverse proxy after failure of backend before asking backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind apache is down, the first request to apache shows the error page of course. But at subsequent requests it seems apache delays for some time before asking the backend server again. During all this time (which is short but in development I don't want a delay at all) only the apache error page is shown to the browser, although the backend server is already up. Where is this setting in apache, what is this behaviour, and how can I set the delay time to zero? Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that apache blocks further requests for a certain time before asking a backend server again that has failed once. Edit2: This is what apache delivers: Service Temporarily Unavailable The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80 After hitting Ctrl-R in firefox for 60 seconds the page finally appears.
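
    The behaviour described matches mod_proxy's worker error state: after a backend failure the worker is marked as failed and not retried for 60 seconds by default, which lines up with the 60-second wait seen in Firefox. A minimal sketch of the likely fix, assuming the vhost uses ProxyPass and with a placeholder backend URL:

        # Hypothetical reverse-proxy snippet; retry=0 disables the error-state delay
        # so Apache asks the backend again on every request during development.
        ProxyPass        / http://127.0.0.1:3000/ retry=0
        ProxyPassReverse / http://127.0.0.1:3000/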

    Read the article

  • Passenger not booting Rails App

    - by firecall
    I'm at the end of my ability, so it's time to ask for help. My hosting company is moving me to a new server. I've got my own VPS. It's a fresh CentOS 5 install with Plesk 9.5.2. Essentially, Passenger just doesn't seem to be booting the Rails app; it's like it doesn't see that it's a Rails app to be booted. I've got Rails 3.0 installed with Ruby 1.9.2 built from source, and I can run bundle install and that works. I've currently got Passenger 3 RC1 installed as per here, but have tried v2 as well. My conf/vhost.conf file looks like this:

        DocumentRoot /var/www/vhosts/foosite.com.au/httpdocs/public/
        RackEnv development
        #Options Indexes

    I've got an /etc/httpd/conf.d/passenger.conf file which looks like this:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0.pre4/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0.pre4
        PassengerRuby /usr/local/bin/ruby
        PassengerLogLevel 2

    All I get is a 403 Forbidden, or the directory listing if I enable Indexes. I don't know what else to do! Yikes. There's nothing in the Apache error log that I can see. The new server admin isn't much help, as I think he's a bit junior and says he doesn't know about Rails... sigh :/ I'm a programmer and server admin isn't my bag :(
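
    A 403 or a bare directory listing usually means Apache is serving public/ itself instead of handing the request to Passenger. A minimal vhost sketch along those lines, where the paths come from the question and the Directory block is an assumption to try rather than a confirmed fix:

        DocumentRoot /var/www/vhosts/foosite.com.au/httpdocs/public/
        RackEnv development
        <Directory /var/www/vhosts/foosite.com.au/httpdocs/public/>
            # Apache 2.2 access rules; without an explicit allow, a 403 is returned
            Options -MultiViews
            AllowOverride all
            Order allow,deny
            Allow from all
        </Directory>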

    Read the article

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers; all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed... e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at BASH shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly-teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not/cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

    Read the article

  • Does Windows XP automatically reassemble UDP fragments?

    - by Matt Davis
    I've got a Windows application that receives and processes XML messages transmitted via UDP. The application collects the data using Windows "raw" sockets, so the entire layer 3 packet is visible. We've recently run across a problem that has me stumped. If the XML messages (i.e., UDP packets) are large (i.e., 1500 bytes), they get fragmented as expected. Ordinarily, this will cause the XML processor to fail because it attempts to process each UDP packet as if it is a complete XML message. This is a known short-coming in the system at this stage of its development. On Windows 7, this is exactly what happens. The fragments are received and logged, but no processing occurs. On Windows XP, however, the same fragments are seen, and the XML processor seems to handle everything just fine. Does Windows XP automatically reassemble UDP fragments? I guess I could expect this for a normal UDP socket, but it's not expected behavior for a "raw" socket, IMO. Further, if this is the case on Windows XP, why isn't the behavior the same on Windows 7? Is there a way to enable this?

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest who would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest? The point of this would be so that users can maintain their own containers, and yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase, I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on...) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by all of this group of OpenVZ guests. For what it's worth, we're running Debian 6. Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin, /usr in this fashion and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)   - Container's root is already mounted
        vzquota : (error)   - there are opened files inside Container's private area
        vzquota : (error)   - your current working directory is inside Container's
        vzquota : (error)     private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes, start the guest, and then mount them after the guest has started, the guest never sees them mounted.
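
    OpenVZ supports an optional per-container action script, /etc/vz/conf/<CTID>.mount, which runs after the container's private area is mounted but before it starts; bind mounts made there target $VE_ROOT rather than the private area, which sidesteps the vzquota "Device or resource busy" errors above. A rough sketch, assuming the master guest's private area lives at /var/lib/vz/private/101 (a placeholder ID):

        #!/bin/bash
        # Hypothetical /etc/vz/conf/1102.mount hook (a sketch, not a confirmed fix):
        # bind the master guest's system directories into this guest's root.
        . /etc/vz/vz.conf
        . "$VE_CONFFILE"
        MASTER=/var/lib/vz/private/101   # placeholder: master guest's private area
        for d in bin sbin lib usr; do
            mount -n --bind "$MASTER/$d" "$VE_ROOT/$d"
        done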

    Read the article

  • Repairing Damage to VMWare Virtual Disk

    - by Lachlan McDonald
    Evening all, I've got a considerable problem I'm hoping to get some resolution on. I had two VMware 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched so I could continue development on a fresh install. To get the files over, I simply added the 9.10 virtual disk into the 10.04 VM, grabbed some of the files I needed, and dismounted it. Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM; I'd always get the "The parent virtual disk has been modified since the child was created." error. I decided this was a good time to back up all the critical files, but now whenever I open the 9.10 disk to get the data it isn't in the same state as it was earlier. My question is: is it possible that when I'm mounting the virtual disk I'm not seeing the most recent snapshot, or, in my blundering, have I lost the virtual disk? Cheers

    Read the article

  • SQL Server 2008 Remote Access

    - by Timothy Strimple
    I'm having problems connecting to my SQL Server 2008 database from my computer. I have enabled remote connections as described in this answer (http://serverfault.com/questions/7798/how-to-enable-remote-connections-for-sql-server-2008). And I have added the ports listed on the microsoft support page to our Cisco Asa firewall and I'm still unable to connect. The error I'm getting from the SQL Management Studio is: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060) Once again, I have double and triple checked that remote connections are enabled under the database properties and that TCP is enabled on the configuration page. I've added tcp ports 135, 1433, 1434, 2382, 2383, and 4022 as well as udp 1434 to the firewall. I've also checked to make sure that 1433 is the static port that is set in the tcp section of the database server configuration. The ports should be configured correctly in the firewall since http/https and rdp are all working and the sql server ports are setup the same way. What am I missing here? Any help you could offer would be greatly appreciated. Edit: I can connect to the server via TCP on the internal network. The servers are colocated in a datacenter and I can connect from my production box to my development box and vice versa. To me that indicates a firewall issue, but I've no idea what else to open. I've even tried allowing all tcp ports to that server without success.
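
    One hedged check, given that internal TCP connections work: confirm the port those working connections actually use, then test that exact port from outside the firewall. The DMV below exists in SQL Server 2008; the server name in the telnet line is a placeholder.

        -- Run over a working internal connection; shows the TCP port it arrived on
        SELECT local_tcp_port, client_net_address
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;

        -- Then, from the remote client, check that the same port answers through the ASA:
        --   telnet db.example.com 1433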

    Read the article

  • Poor gaming performance with hyper-v installed in windows 8

    - by SnowCrash
    I am getting very poor gaming performance on my Windows 8 host OS with Hyper-V installed but no guest machines running. For example World of Tanks reports 60-70 FPS without Hyper-V installed and 4-14 FPS with it installed. A similar, dramatic, hit is observed in several other games so the issue is not WoT specific. To make the point clear, I am not trying to run games in a virtual machine. I don't even have a VM running while observing this effect. I simply have the Hyper-V feature installed. My system specs: AMD Phenom II 965 (3.4 GHz) AMD Radeon 6950 2GB (XFX Double D HD-695X-CDFC) 16GB DDR3 1333 AMD 790GX chipset Mainboard (Gigabyte GA-MA790GPT-UD3H) I have tried every AMD driver from 12.8 to the current 12.11beta8, virtualization is enabled in the BIOS settings, the onboard 3300HD video device is disabled in BIOS and I have read the MSDN blog entry here regarding a similar issue in Server 2008 that was resolved in 2008 R2 (and hopefully not regressed in Win 8). I'd like to be able to use Hyper-V for development and testing at home (I am a sysadmin/software developer professionally). If, however, I can't also use my home system for entertainment I'll have to scrap those plans.
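
    A common workaround for this situation, offered as a sketch rather than a confirmed fix: with the Hyper-V role installed, the host OS itself runs on top of the hypervisor even when no guests are running, and the hypervisor can be switched off at boot without uninstalling the role. Run from an elevated command prompt and reboot after each change.

        rem Disable the hypervisor for gaming sessions (the Hyper-V role stays installed)
        bcdedit /set hypervisorlaunchtype off

        rem Re-enable it when virtualization is needed again
        bcdedit /set hypervisorlaunchtype auto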

    Read the article

  • Logging hurts MySQL performance - but, why?

    - by jimbo
    I'm quite surprised that I can't see an answer to this anywhere on the site already, nor in the MySQL documentation (section 5.2 seems to have logging otherwise well covered!) If I enable binlogs, I see a small performance hit (subjectively), which is to be expected with a little extra IO -- but when I enable a general query log, I see an enormous performance hit (double the time to run queries, or worse), way in excess of what I see with binlogs. Of course I'm now logging every SELECT as well as every UPDATE/INSERT, but, other daemons record their every request (Apache, Exim) without grinding to a halt. Am I just seeing the effects of being close to a performance "tipping point" when it comes to IO, or is there something fundamentally difficult about logging queries that causes this to happen? I'd love to be able to log all queries to make development easier, but I can't justify the kind of hardware it feels like we'd need to get performance back up with general query logging on. I do, of course, log slow queries, and there's negligible improvement in general usage if I disable this. (All of this is on Ubuntu 10.04 LTS, MySQLd 5.1.49, but research suggests this is a fairly universal issue)
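
    Since MySQL 5.1 the general query log can be toggled at runtime, so one way to keep development convenient without paying the cost permanently is to switch it on only while debugging. A sketch, assuming a user with SUPER privileges and a placeholder log path:

        -- Turn the general log on only while it is actually needed
        SET GLOBAL log_output = 'FILE';
        SET GLOBAL general_log_file = '/var/log/mysql/general.log';
        SET GLOBAL general_log = 'ON';

        -- ... reproduce the queries of interest, then switch it back off
        SET GLOBAL general_log = 'OFF';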

    Read the article

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD Dual Core with 4 gigs RAM). I am working on installing my development environment and tools and one of the first things I was working on is getting MySQL installed. The following was my configure statement with options:

        ./configure --prefix=/usr/local/mysql --with-big-tables --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock --with-named-curses-libs=/lib/libncurses.so.5.7

    After I did the make;make install, I did the post configuration such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to mysql to start using it, the following shows what happens:

        $ mysql -u root -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.1.42 Source distribution

        Segmentation fault

    I have searched Google extensively, I have searched through the mysql bugs database and I have yet to find anything that matches my issue. Here is the contents of my my.cnf file, in case you want to see it:

        $ cat /etc/my.cnf
        [mysqld]
        basedir=/usr/local/mysql
        datadir=/usr/local/mysql
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysql.server]
        user=mysql
        #basedir=/var/lib

        [client]
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysqld_safe]
        err-log=/usr/local/mysql/logs/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I am really hoping that someone here can tell me what has gone wrong with my installation as I would really love to know. I welcome and look forward to all responses. Thank you in advance! Best regards, Jeff

    Read the article

  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: My development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa - users granted access on SharePoint that update files in that workspace should be able to save their files and the changes should appear automatically on our client PC's. I've already done the organization of the folders so that in Dropbox, there exists a SharePoint folder that looks something like this: SharePoint ----Team --------Restricted Access Folders ----Organization --------Open Access Folders The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy - or all I would have to do is find where the Sharepoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of 2-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server. I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating - and as much as I love rsync, I've had nothing but bad luck with it on Windows, and again, it has to be kicked off manually or through Task Scheduler - and I just have a feeling if I go down that route, it's only a matter of time before I get conflicts all over the place in either Dropbox, SharePoint, or both. I really want something that's going to watch both folders, and when one item changes, the other automatically updates in "real-time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: To be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.

    Read the article

  • RHEL 5.3 Kickstart - How specify location of individual package in Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO. And I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disk: Client and Workstation. The packages that are physically located under the Client folder install fine. It cannot find those under the Workstation folder, such as doxygen and subversion, complaining that the packages do not exist. Is there a way to specify the individual package location?

        # -----------------------------------------------------------------------------
        # P A C K A G E S
        # -----------------------------------------------------------------------------
        %packages
        @gnome-desktop
        @core
        @base
        @base-x
        @printing
        @development-tools
        emacs
        kexec-tools
        fipscheck
        xorg-x11-server-Xnest
        xorg-x11-server-Xvfb
        #Packages Located in Workstation Folder *** Install can not find any of these ??
        bison
        doxygen
        gcc-c++
        subversion
        zlib-devel
        freetype-devel
        libxml2-devel

    Thanks in advance, -Ed
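
    In a RHEL 5 kickstart, anaconda only resolves packages from installation repositories it has been told about, so a second folder on the media generally needs its own repo line ahead of %packages. A hedged sketch; the baseurl values are placeholders and could instead point at a copy of the media published over HTTP alongside the kickstart file:

        # Hypothetical additions above the %packages section; baseurl values are placeholders
        repo --name=Client --baseurl=http://kickstart.example.com/rhel53/Client/
        repo --name=Workstation --baseurl=http://kickstart.example.com/rhel53/Workstation/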

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to vagrant; would someone who knows more about it (and puppet) be able to explain how vagrant deals with the ssl certs needed when making vagrant testing machines that are processing the same node definition as the real production machines? I run puppet in master/client mode, and I wish to spin up a vagrant version of my puppet production nodes, primarily to test new puppet code against. If my production machine is, say, sql.domain.com, I spin up a vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same puppet node definition. On the puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the vagrant machine and the real one get that node entry on the puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in puppet's autosign.conf, so the vagrant machine gets signed. So far, so good... However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old puppet ssl cert off of it and then re-running puppet with a given node name of sql.domain.com? The new ssl cert would be autosigned by puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine?! One solution I can think of is to avoid autosigning, and put the known puppet ssl cert for the real production machine into the vagrant shared directory, and then have a vagrant ssh job move it into place. The downside of this is I end up with all my ssl certs for each production machine sitting in one git repo (my vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this. tl;dr: How do other people deal with vagrant & puppet ssl certificates for development or testing clones of production machines?

    Read the article

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 w/Debian Squeeze) froze up over the weekend and I was forced to reset it, as in unresponsive from KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures, the one time I don't check for confirmation). So after the reset, it turns out that every trace of disk IO activity that should have happened for a period of ~24h is completely gone. The log files have a big gap in the dates and times, as if the writes were never committed to disk and no processes had run. Luckily it was a weekend, nothing of value would have been lost, and I don't suspect a hack. What can I do in a post mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk checking tools right now, but there must be more going on!

        Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro)
        Kernel: Linux dev 2.6.32-5-686-bigmem
        Disk/Inodes: 13%/3%

    Read the article

  • Why is /usr/bin/env permission denied to rails server?

    - by Eric Hopkins
    I've just set up Rails on an Apache server running on Ubuntu, and when I try to go to the root page it gives this error:

        /usr/bin/env: bash: Permission denied

    env and all the directories in the path have permissions 755. I tried setting env to have permissions 777 but still got the same error. Rails is running as "nobody". Why is this happening? I don't know what else to try. In /etc/apache2/sites-available/api.conf:

        <VirtualHost *:80>
            ServerName api.thinknation.ca
            ServerAlias api.thinknation.ca
            DocumentRoot /var/www/api/public
            ErrorLog /var/www/logs/error.log
            CustomLog /var/www/logs/access.log combined
            RailsSpawnMethod smart

            <Directory /var/www/api/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews -Indexes
                # Uncomment this if you're on Apache >= 2.4:
                Order allow,deny
                Allow from all
                #Require all granted
            </Directory>
        </VirtualHost>

    From config/database.yml in my Rails directory (with sensitive user names and passwords omitted):

        default: &default
          adapter: mysql2
          encoding: utf8
          pool: 5
          username: root
          password:
          socket: /var/run/mysqld/mysqld.sock

        development:
          <<: *default
          database: api_development

        test:
          <<: *default
          database: api_test

        production:
          <<: *default
          url: <%= ENV['DATABASE_URL'] %>
          database: api
          username: ------------
          password: ------------

    Not sure what other details or files are relevant; I will add them if needed.
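
    The error means /usr/bin/env itself ran but could not execute bash as the "nobody" user, so the usual suspects are the permissions somewhere along the path to /bin/bash or a noexec mount, rather than env itself. A hedged way to reproduce and narrow it down from a shell:

        # Try to do exactly what the Rails process does, as the same user
        sudo -u nobody /usr/bin/env bash -c 'echo ok'

        # Show permissions on every path component leading to the binaries involved
        namei -l /bin/bash
        namei -l /usr/bin/env

        # Check whether the filesystem holding bash is mounted noexec
        mount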

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to password-protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored: when wp-login.php or the wp-admin folder is called, it goes directly to the normal WordPress login. I installed everything from the command line in the following way:

        sudo apt-get install apache2-utils
        sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser
        New password:
        Re-type new password:
        Adding password for user exampleuser

    My Nginx configuration file looks like this:

        server {
            listen 80;
            root /var/www;
            index index.php index.html index.htm;
            server_name example.com;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /bitmall/wp-admin/ {
                auth_basic "Restricted Section";
                auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;
            }

            location ~ /\.ht {
                deny all;
            }

            error_page 404 /404.html;
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /var/www;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I would appreciate your advice on this.
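
    The likely reason the auth is skipped, offered as an assumption to verify: nginx picks the "location ~ \.php$" regex block for any .php request, including the PHP files under wp-admin/, so the auth_basic directives in the plain prefix location never apply to them. A sketch of one way around that is to make the protected prefix authoritative with ^~ and nest a PHP handler inside it (wp-login.php itself lives at /bitmall/wp-login.php, outside this prefix, so it would need a similar block of its own):

        location ^~ /bitmall/wp-admin/ {
            auth_basic "Restricted Section";
            auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }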

    Read the article

  • Dynamic nginx domain root path based on hostname?

    - by Xeoncross
    I am trying to set up my development nginx/PHP server with a basic master/catch-all vhost config so that I can create unlimited ___.framework.loc domains as needed.

        server {
            listen 80;
            index index.html index.htm index.php;

            # Test 1
            server_name ~^(.+)\.frameworks\.loc$;
            set $file_path $1;

            root /var/www/frameworks/$file_path/public;
            include /etc/nginx/php.conf;
        }

    However, nginx responds with a 404 error for this setup. I know nginx and PHP are working and have permission, because the localhost config I'm using works fine:

        server {
            listen 80 default;
            server_name localhost;
            root /var/www/localhost;
            index index.html index.htm index.php;
            include /etc/nginx/php.conf;
        }

    What should I be checking to find the problem? Here is a copy of the php.conf they are both loading:

        location / {
            try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_index index.php;

            # Keep these parameters for compatibility with old PHP scripts using them.
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

            # Some default config
            fastcgi_connect_timeout 20;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_intercept_errors on;
            fastcgi_ignore_client_abort off;
            fastcgi_pass 127.0.0.1:9000;
        }
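
    Two things worth checking, stated as assumptions rather than a confirmed diagnosis: the prose mentions ___.framework.loc while the regex matches .frameworks.loc, so the hostname being tested may simply not match this server block at all; and the numbered capture $1 can be clobbered by any later regex match, so a named capture is the more robust way to carry the value into root. A sketch of the server block with a named capture:

        server {
            listen 80;
            index index.html index.htm index.php;

            # Named capture survives later regex matches, unlike $1
            server_name ~^(?<file_path>.+)\.frameworks\.loc$;

            root /var/www/frameworks/$file_path/public;
            include /etc/nginx/php.conf;
        }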

    Read the article

  • Authenticating Linked Servers - SQL Server 8 to SQL Server 10

    - by jp2code
    We have an old SQL Server 2000 database that has to be kept because it is needed on our manufacturing machines. It also maintains our employee records, since they are needed on these machines for employee logins. We also have a newer SQL Server 10 database (I think this is 2008, but I'm not sure) that we are using for newer development. I have recently learned (i.e. today) that I can link the two servers. This would allow me to access the employee tables in the newer server. Following the SF post SQL Server to SQL Server Linked Server Setup, I tried adding the link. In our SQL Server 2000 machine, I got this error: Similarly, on our SQL Server 10 machine, I got this error: The messages, though worded different, probably say the same thing: I need to authenticate, somehow. We have an Active Directory, but it is on yet another server. What, exactly, should be done here? A guy HERE<< said to check the Security settings, but did not say what else to do. Both servers are set to SQL Server and Windows Authentication mode. Now what?
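
    A hedged sketch of the usual way around this when the two servers cannot share Windows authentication: create the linked server with an explicit remote SQL login mapping instead of relying on pass-through credentials. Run on the SQL Server 2008 instance; the server name, instance, login, and password below are placeholders.

        -- Define the linked server pointing at the SQL Server 2000 box
        EXEC sp_addlinkedserver
            @server     = N'LEGACY2000',
            @srvproduct = N'',
            @provider   = N'SQLNCLI',
            @datasrc    = N'OLDSERVER\INSTANCE';

        -- Map all local logins to one SQL login that exists on the 2000 server
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'LEGACY2000',
            @useself     = N'FALSE',
            @locallogin  = NULL,
            @rmtuser     = N'linked_login',
            @rmtpassword = N'********';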

    Read the article

  • USB Mouse and Keyboard not working in Linux 4 Tegra

    - by Sijo
    I am new to Tegra Linux development. I have a Tamontem NG evaluation board with a Tegra 3 chip. I installed the L4T sample file system from the NVIDIA Tegra resources (https://developer.nvidia.com/linux-tegra) and set it up as described in the documentation provided on the NVIDIA site. There was already an SD card with L4T running, and I don't want to change the boot loader, so I copied boot.scr.uimg to the root (/) folder and uImage to /boot/, and it starts booting from the existing SD card. While booting, some errors occurred for Bluetooth devices (there is no Bluetooth device on the board), so I disabled Bluetooth with the following command:

        sudo mv /etc/init/bluetooth.conf /etc/init/bluetooth.conf.noexec

    Now the problem is that the mouse and keyboard are not working, so I cannot log in. Even though I installed a desktop, the mouse and keyboard are not working. They are enumerating, though; the lsusb command shows the USB mouse and keyboard. The installed file system is Ubuntu 13.04 and the Linux kernel version is 3.1. What should I do? Please help. Thanks in advance.

    Read the article

  • Setting up a dualboot by installing cloned partitions using clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system where I have Windows 7 and Linux Mint. Here's the kicker: both are partitions I've saved using Clonezilla from different places, and to make matters worse, Linux Mint is formatted as LVM. I need both of these images specifically, as Windows is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't get Clonezilla to not mess up the partition table of Windows when installing Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then I have an unusable partition when it clones and I'm not sure how to fix the partition table. Here's what I'm doing:

        1. Use GParted to make partitions: sda1 40GB ntfs (Windows), sda2 extended 70GB, sda5 lvm2 pv 69.99GB (Linux), sda3 500MB (GRUB)
        2. Clonezilla the Windows image into the sda1 partition (keeping the partition table)
        3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table)

    After all that I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working, or where I might be going wrong? If there is any additional detail you need, please let me know and I'll edit, as this is a lot.
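
    One hedged way to make GRUB see a Mint install that lives inside LVM is to activate the volume group from a live CD, chroot into it, and reinstall and regenerate GRUB so os-prober picks up both systems. The device and logical volume names below are placeholders and need to match the actual VG/LV:

        # From a live CD / rescue environment (sketch, adjust names to the real volume group)
        sudo vgchange -ay                      # activate the LVM volume group on sda5
        sudo mount /dev/mint-vg/root /mnt      # placeholder LV name
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda                  # put GRUB in the MBR
        update-grub                            # os-prober should list Windows 7 and Mint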

    Read the article

  • Setting up CIFS ISO Repository for Xen

    - by user85610
    I recently started working with Xen, to try to make better use of an extra desktop box for development testing. I'd like to be able to do OS installs on it without having to burn discs, but I'm having some trouble actually being able to get it to boot OS ISOs from a Windows share. My Windows box is running Win 7, and it's on a domain. I created a CIFS ISO SR in Xen, specifying the correct username and password to use. Xen is able to scan the share, and I see the ISOs that are in the folder, and can select them in the list in XenCenter. However, when I try to start the VM, I get "Error: Starting VM 'linxcentos' - INVALID_SOURCE - Unable to access a required file in the specified repository: file:///tmp/cdrom-repo-hIz-H7/isolinux/vmlinuz." I tried booting a different Linux ISO and got the same result. I know that the ISOs are valid because I was able to install them without issue when I tried VMWare ESXi earlier. What am I missing here? It's Xen/XenCenter 6 and I'm trying to install the newest version of Centos. I may end up burning it for now, but I'd like to get this to work, at least just for the principle of not letting mysterious behaviors go unsolved...

    Read the article

  • Are Windows Domain Service Accounts Really Necessary?

    - by Zach Bonham
    One of the biggest problems we have in automating application deployments is the idea that running IIS AppPools and Windows Services under domain service accounts is a 'best practice'. Unfortunately, this best practice sometimes causes deployment headaches in that either we need to provision a new domain level service account quickly, or once we have the account, we now need to manage the account credentials. I had a great conversation about not making domain level service accounts a requirement and effectively taking one of two approaches: Secure at the node level using machine account(domain\machine$) and add the node to appropriate ActiveDirectory/Sql groups/roles Create local app specific accounts on each machine (machine\myapp) and add that account to appropriate ActiveDirectory/Sql groups/roles (the password here can change per deployment, it doesn't need to be stored) In both cases, it seems that its easier to manage either adding an account to appropriate group/role, or even stand up new, local account, than it is to have to provision a new domain level account and manage those credentials. This would hopefully ease the management burden on ActiveDirectory, Sql Server and Operations teams as there would be no more password management. We've not actually been able to implement this in practice yet. I am coming from a development background, so I'm curious as to how many ways this approach could go wrong? Can we really get rid of domain level service accounts with this direction? I'd appreciate any thoughts from anyone who has taken this path! Thanks! Zach

    Read the article

  • Why Is Web Sharing Broken on My Mac?

    - by Sam Murray-Sutton
    Background: I use my Mac for web development, running copies of web sites locally. I recently installed the Snow Leopard update, which to all intents and purposes seems to have gone fine, except... What's not working? Web Sharing; more specifically, I can't turn it on via Preferences. The preference pane just hangs when I try to, so Apache doesn't start on reboot. I can start Apache by hand, but I don't know enough to either set up Apache to start with the computer, or to properly fix Web Sharing. Further details: my Apache error log shows nothing when the system boots up (as I would expect). This is the error message when I try to start Web Sharing from the Sharing preference pane:

        28/09/2009 10:58:05 System Preferences[834] setInetDServiceEnabled failed with 1 for org.apache.httpd

    Here are the messages given when I start Apache from the command line:

        [Mon Sep 28 10:35:53 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Mon Sep 28 10:35:54 2009] [warn] mod_bonjour: Skipping user 'sams' - index file /Users/sams/Sites/index.html has zero length.
        [Mon Sep 28 10:35:54 2009] [notice] Digest: generating secret for digest authentication ...
        [Mon Sep 28 10:35:54 2009] [notice] Digest: done
        [Mon Sep 28 10:35:54 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 Phusion_Passenger/2.2.5 configured -- resuming normal operations

    Please let me know if you need any further details on this. Any help would be greatly appreciated. UPDATE: I have added an answer of my own below. I was able to solve it thanks to being pointed in the right direction by the comments below, so thanks very much. But I'm still not totally clear as to what caused the problem or how my solution addressed it, so I'm leaving the question open for now.

    Read the article

  • Networking lost after update from Debian Wheezy to Jessie

    - by Charaf
    I am currently setting up a virtual machine for development purposes. I did a big part of this configuration under Wheezy, but I need some debs that are available only in Jessie, so I updated sources.list and did a dist-upgrade. Everything went well, but after the reboot I noticed that I had lost all networking: repositories are unreachable, and a simple ping google.fr returns nothing. What can I do to quickly restore networking so that I can continue my work? I have a poor connection and cannot afford to download the whole install DVDs.

        root@vm~# ifconfig
        lo    Link encap:Boucle locale
              inet adr:127.0.0.1 Masque:255.0.0.0
              adr inet6::1/128 Scope:Hôte
              UP LOOPBACK RUNNING MTU:65536 Metric 1
              RX packets:452 errors:0 dropped:0 overruns:0 frame:0
              TX packets:452 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 lg file transmission:0
              RX bytes:164238 (160.3 KiB) TX bytes:164238 (160.3 KiB)
        root@vm~#

    I am running VMware 1.0.1 build 1379776 and the latest update of Jessie (Debian 3.14.4-1). Please help. Thanks.
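
    Only the loopback interface shows up, so a reasonable first check (a sketch, not a confirmed fix) is whether the ethernet interface simply came up unconfigured or was renamed during the upgrade, and then to bring it up by hand:

        ip link                       # list every interface the kernel sees (eth0, ens33, ...)
        cat /etc/network/interfaces   # check whether that interface is declared at all

        # If the interface (eth0 used here as a placeholder) is missing from the file, add:
        #   allow-hotplug eth0
        #   iface eth0 inet dhcp
        # and then bring it up:
        ifup eth0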

    Read the article

  • Mod_rewrite (CakePHP routing functionality) forbidden after Snow Leopard upgrade

    - by Ryan Ballantyne
    Hello ServerFault, I am using the standard Apple-provided installations of PHP 5.3 and Apache 2 to do web development on a Mac Pro that I just upgraded to Mac OS X 10.6 (Snow Leopard). The upgrade went well enough, if I ignore the fact that it destroyed my ability to get work done. ;) After the update, the CakePHP application I was developing started giving me 403 Forbidden errors when accessed. Based on the errors in the log file, I've determined that Apache is choking on the mod_rewrite rules in Cake's .htaccess file. Here's the file, in its entirety:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    It's not that the rules themselves are wrong, but that Apache is forbidding the use of mod_rewrite altogether. All other pages on the machine work fine, and the 403 errors go away if I comment out the .htaccess file (but nothing works, of course). In my httpd.conf file, I've tried changing this:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

    To this:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    ...which has no effect. I don't know much about Apache configuration files, and I'm quite stuck on this. In fact, I know little enough that I'm not sure which information about my setup is needed to enable people to provide useful answers. I'm just using the vanilla OS X setup, nothing fancy. Googling has yielded no fruits for me this time, so I'm turning to you. Any ideas?
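
    A hedged guess at the cause, based on the classic mod_rewrite failure mode: when FollowSymLinks (or SymLinksIfOwnerMatch) is not enabled for the directory the .htaccess lives in, Apache refuses RewriteRule with exactly this kind of 403, and editing "<Directory />" does not help because the more specific <Directory> block covering the directory that actually serves the site wins. A sketch of both ways to switch the option on; the httpd.conf path is the default Snow Leopard DocumentRoot and is an assumption:

        # Option 1: in the site's .htaccess, above the rewrite rules
        Options +FollowSymLinks

        # Option 2: in /etc/apache2/httpd.conf, in the block covering the site directory
        <Directory "/Library/WebServer/Documents">
            Options FollowSymLinks
            AllowOverride All
        </Directory>

    Either way, restart Apache afterwards with "sudo apachectl restart".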

    Read the article
