Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

Page 974/1071

  • Network bandwidth usage dashboard?

    - by SkippyFlipjack
    I have a couple of wifi access points hooked up to my home network, one of which I keep unsecured for some development I do; there are only a couple other homes within range and they've got their own wifi so it's not a big concern. I also have a Sonos system, Tivo, Roku, a couple laptops, a couple phones, an iPad and a desktop machine, all of which are internet-smart. So when my internet bandwidth tanks and it takes five minutes to load a YouTube video, I want to know what's going on, and there are many potential culprits. I'd like to be able to plug my MacBook into the primary router and see a nice little dashboard of the units on the network and what kind of bandwidth each is using at that moment. I could figure this out from WireShark or tcpdump but figure there has to be an easier way. I've tried a few different commercial products but none really presented the right info. Suggestions? (This may be a question for superuser since my Apple Time Capsule's SNMP capabilities are limited, but I figure admins of small business networks would have dealt w/ the same issue..)
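
    Building on the tcpdump idea already in the question, here is a rough command-line sketch of the "who is using the bandwidth right now" tally; it is nowhere near a dashboard, the interface name and subnet are assumptions to replace with your own, and on a plain switched port the capture only sees the MacBook's own traffic, so it needs to run somewhere the other devices' packets are actually visible (a mirrored port, or the router itself if it has a shell).

        # tally bytes per local source address over a fixed capture window
        sudo tcpdump -i en0 -n -q -t -c 5000 'net 192.168.1.0/24' 2>/dev/null |
          awk '{ sub(/\.[0-9]+$/, "", $2)            # strip the source port
                 bytes[$2] += $NF }                  # last field is the reported length
               END { for (h in bytes) printf "%-16s %10d bytes\n", h, bytes[h] }'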

    Read the article

  • Apache Virtualhost entry with Windows hostname

    - by gshauger
    I have a Windows Domain Controller and we use it for DNS for our internal network. I have an Ubuntu box with an IP address of 172.16.34.149. Within the Windows DNS I created the forward and reverse lookup entries for the name Endymion. Naturally, whenever I FTP/SSH/HTTP/etc to the hostname Endymion it resolves correctly to my Ubuntu box. I wanted to do some web development on this box for an existing site. There were problems when I placed the website in a subfolder of /var/www/. Let's just say it was in folder /var/www/projectx/. The issue involved the incorrect resolution of non-relative URLs. So I figured I could create a new DNS entry for the hostname projectx. Sure enough, when I FTP/SSH/HTTP/etc to the hostname projectx it takes me to the same Ubuntu box as the hostname Endymion... this is what I would expect. I now have two hostnames for the same box. I then created a VirtualHost entry in httpd.conf that looks like the following: <VirtualHost *:80> DocumentRoot /var/www/projectx ServerName projectx ServerAlias projectx </VirtualHost> Sure enough, when I go to a browser and type in http://projectx/ it takes me to the correct subfolder. Everything works!!! Not so fast. I then go to http://endymion/ and instead of taking me to /var/www/ it takes me to /var/www/projectx/. Clearly I'm missing something. Help please! ;)
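
    For what it's worth, the behaviour described here (endymion landing in the projectx root) is what Apache does when a request's Host header matches no ServerName or ServerAlias: it hands the request to the first VirtualHost defined for that address and port. A minimal sketch of one fix is to declare an explicit catch-all vhost for endymion ahead of the projectx one; the sites-available path is an assumption, and the pair could just as well replace the existing block in httpd.conf.

        # /etc/apache2/sites-available/endymion.conf  (sketch)
        <VirtualHost *:80>
            # listed first, so it also catches any Host header nothing else claims
            ServerName endymion
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName projectx
            DocumentRoot /var/www/projectx
        </VirtualHost>

    Enable it and reload (sudo a2ensite endymion && sudo apache2ctl graceful), and remove the original <VirtualHost> block from httpd.conf so projectx is not declared twice.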

    Read the article

  • Sudoers scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server. I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop. The problem: retain control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example. I need him not to be able to: take away other admin permissions, change the root password, or have control over other security/administrative functions. I would like him to still be able to: install software (through apt-get), restart Apache, access MySQL, configure MySQL/Apache, reboot, and edit web-development configuration files in /etc/. Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above. Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know whether that setup is even possible. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
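
    As a starting point, here is a rough, untested sudoers sketch along the lines of the list above. The account name "webdev" is hypothetical, the command paths are guesses for a Debian/Ubuntu box, and anything like unrestricted apt-get or editing under /etc/ can still be parlayed into root, so treat it as damage limitation rather than a hard boundary (plain mysql client access needs no sudo at all).

        # /etc/sudoers.d/webdev  (sketch; "webdev" is a hypothetical account name)
        # install with:  visudo -f /etc/sudoers.d/webdev   and keep the file mode 0440
        Cmnd_Alias WEBDEV_PKG  = /usr/bin/apt-get
        Cmnd_Alias WEBDEV_SRV  = /etc/init.d/apache2, /etc/init.d/mysql, /sbin/reboot
        Cmnd_Alias WEBDEV_EDIT = sudoedit /etc/apache2/*, sudoedit /etc/mysql/*
        webdev ALL = (root) WEBDEV_PKG, WEBDEV_SRV, WEBDEV_EDIT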

    Read the article

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, mainly based on a Tomcat web server. We have two datacenters with about 10 hosts each. Hosts are of several types: database, Tomcat, some offline Java processes, and memcached servers. All hosts run CentOS Linux. Up until now, when releasing a new version to production we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult operating all this and taking code from development through QA and staging to production. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and this isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following: support for the given stack (Java, Tomcat, Linux, etc.); easy deployment through the different stages, including QA and production; configuration management, e.g. setting server properties (what's the connection URL of each host, etc.), server.xml or context configuration; and monitoring. If we can get monitoring in the same package, that'll be nice; if not, then yet another tool we can use to monitor our servers. Preferably open source with tons of documentation ;) Can anyone share their experience? Suggest a few tools? Thanks!
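
    Whatever tool ends up being chosen, the "revert to last known good" pain can be eased right away with symlinked releases; the sketch below is only an illustration, and the paths, WAR name, and service name are assumptions.

        # each deploy lands in its own timestamped directory; "current" is a symlink
        REL=/opt/app/releases/$(date +%Y%m%d%H%M%S)
        mkdir -p "$REL" && cp app.war "$REL/"
        ln -sfn "$REL" /opt/app/current      # Tomcat's appBase/context points at "current"
        sudo service tomcat6 restart
        # rollback = repoint the symlink at the previous release and restart again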

    Read the article

  • Problem running MVC3 app in IIS 7

    - by mjmoore99
    I am having a problem getting an MVC 3 project running in IIS 7 on a computer running Windows 7 Home 64-bit. Here is what I did. Installed IIS 7. Accessed the server and got the IIS welcome page. Created a directory named d:\MySite and copied the MVC application to it. (The MVC app is just the standard app that is created when you create a new MVC3 project in Visual Studio. It just displays a home page and an account logon page. It runs fine inside the Visual Studio development server, and I also copied it out to my hosting site and it works fine there.) Started the IIS management console. Stopped the default site. Added a new site named "MySite" with a physical directory of "d:\MySite". Changed the application pool named MySite to use .NET Framework 4.0, Integrated pipeline. When I access the site in the browser I get a list of the files in the d:\MySite directory. It is as if IIS is not recognizing the contents of d:\MySite as an MVC application. What do I need to do to resolve this?

    Read the article

  • crontab not running on VirtualBox unless I'm logged in

    - by Mike
    I am running Ubuntu Server 9.04 in VirtualBox on my work PC as a development environment. I have some scripts that I've put in my user's crontab that run throughout the day while I'm SSHed into the VM. Last night, I closed PuTTY and all of my other running applications (except for VirtualBox and the VM) and went home. I came back this morning to discover that my cron jobs didn't run at all, yet when I SSHed into the VM, the next scheduled job ran. I set the schedule to 5 minutes to test, disconnected again, and the jobs stopped running on schedule. They seem to only run if I'm logged in to the machine. Obviously, I want them to run on schedule even if I'm not logged in to the VM, otherwise there's no point. Is there something I've failed to configure correctly? New information: there are now 3 entries in /var/log/cron.log saying the following: "Mount of private directory return code [256]"... the entries correspond to when the cron job is supposed to run. I thought the jobs were supposed to run as my user ID? Why would my own user ID be unable to run a script in my home directory?
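
    That "Mount of private directory return code [256]" line looks like the giveaway: if the home directory (or ~/Private) was set up with eCryptfs, it is only mounted while a login session is open, so once the last SSH session closes cron can no longer read anything stored inside it. A minimal sketch of the usual workaround, with hypothetical paths (anything the script itself reads or writes would need to move out of the encrypted area too):

        # keep the job somewhere that is readable without a login session
        sudo mkdir -p /usr/local/lib/cronjobs
        sudo cp ~/bin/myjob.sh /usr/local/lib/cronjobs/
        # then point the crontab entry at the new location, e.g.:
        #   */5 * * * * /usr/local/lib/cronjobs/myjob.sh >> /var/tmp/myjob.log 2>&1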

    Read the article

  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have or investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4GB of RAM (800MHz) on an Asus PK5-E motherboard which I built two years ago. My original plan was that one day I'd upgrade the machine to 8GB (1066MHz, the max the PK5-E allows) and to the Core 2 Quad Q9550 to give the machine a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB? And when I looked into it, getting 8GB of DDR3 RAM isn't much more expensive than the 8GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me? So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or using that money on an SSD or 10,000rpm drive for the existing system's OS/Apps/Virtual Machine drive? Or just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost compared to the Q9550 for what I'd be using it for? Thanks in advance for your input!!!

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying off some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.

    Read the article

  • Setting up test and live environments - how?

    - by Sean
    I am a bit new to servers and stuff, so I had a question. I have my development team working on my website. They are in different countries, and currently they put all the work live on the test site. But the test site is open to anyone who knows the URL. It is behind a directory, but this affects my QA process because I cannot use the accurate URL structures to prevent the general public from seeing it. So what I want to do is: have my site live on the net but only for me and my team, so like an internal network. Also I will need to mirror this to my live site when I put it live. So I guess this is something like setting up a staging and live environment. So how do I do it, and are both environments on the same physical server or do I need to buy two servers? And if I set up a staging environment, how will my team and I access it, since we are all spread out? I assume we need to log into something to access it? What about the URL - do I need a different URL for the test site, or can I use the same live URL for the test site? I plan to get a dedicated server + CDN for my site.
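
    One common way to get "live on the net but only for the team", assuming an Apache front end, is to give the staging copy its own vhost and lock it behind HTTP auth (or an IP allow-list); the hostname, paths, and account name below are placeholders, and the same idea works whether staging shares the live box or sits on a second server.

        # /etc/apache2/sites-available/staging.conf  (sketch)
        <VirtualHost *:80>
            ServerName staging.example.com
            DocumentRoot /var/www/staging
            <Directory /var/www/staging>
                AuthType Basic
                AuthName "Staging"
                AuthUserFile /etc/apache2/staging.htpasswd
                Require valid-user
            </Directory>
        </VirtualHost>

    Create the password file with sudo htpasswd -c /etc/apache2/staging.htpasswd teammate1 (repeat without -c for each extra person) and enable the vhost; the staging copy then stays reachable from anywhere but behind a login, with the same URL structure as live apart from the hostname.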

    Read the article

  • Windows 8 "Upgrade Offer" eligibilty when running the Consumer Preview in a VM?

    - by Dan Harris
    If I have a VM running Windows 8 Consumer/Release Preview, am I allowed to take advantage of the Windows 8 upgrade offer, and install it on that machine? I would have assumed not...as there was never a licensed version of XP SP3 through to Windows 7 installed in that VM. It was a clean installation of the Consumer Preview into a VM. My confusion comes from the notes at the bottom of the download page for the Upgrade offer which states: Offer valid from October 26, 2012 until January 31, 2013 and is for individuals and small businesses needing to upgrade up to five devices. If you are a business customer looking to upgrade more than five devices to Windows 8 Pro, contact your Microsoft partner for more information. To install Windows 8 Pro, customers must be running Windows XP SP3, Windows Vista, Windows 7, Windows 8 Consumer Preview, or Windows 8 Release Preview. I am assuming it's not possible and i'll need to purchase the System Builder edition to install within a VM? My guess is that you can use your downloaded upgrade offer only if you updated Windows 7 to the release preview, and therefore had the Windows 7 license on the machine, I used the serial number from the Microsoft Website when downloading the Release Preview, and did a clean install, so there was never a Windows 7 license on the VM. I have MSDN for development purposes, but I am looking to run in a VM for personal use as well, so my MSDN license is not valid for that particular use.

    Read the article

  • SQL Server 2008 services error on account

    - by TheDude
    I installed SQL Server Enterprise, but can't get it to work. It is a stand alone, on a laptop for development purposes. No network is involved, no other users. The OS is windows 7. Now, I keep receiving eventId 7000, which means that access is denied for the user (the user was Network Services). So, after reading up on it, I kind of got the idea that a user account should be created with minimal privileges. So, off I went and added a user, SQLservices. In the SQL Server Configuration Manager I right clicked SQL Server(MSSQLSERVER), and in the properties I added my new user. Well, here's mister eventId 7000 again. I don't get what I am doing wrong. Also, this new user ends up on my start-up screen. I don't think I want that... I mean, it would be weird to have x number of users crowding up my start-up screen just because I created those for my windows services... The error I get when I add the user in SQL Server Configuration Manager is as follows: Permission Denied. [0x80070005] Helps!

    Read the article

  • Windows 7 pc freezes for an indeterminate amount of time after unlocking

    - by pikes
    Not sure if this type of question is appropriate for this forum, but I've tried everything I can think of to solve this problem aside from format/reinstall. I recently got a new work PC (Dell optiplex 755) with windows 7 professional x64. Standard developer software installed for .net development: VS2008, VS2005, SQL management studio, office 2007, etc. Recently I've been having this weird problem where after I lock my pc, when I try to unlock it, the screen will be black for awhile after unlocking. I can ctl+alt+del and put my password in but then it just goes black. The amount of time on the black screen seems to be related to the amount of time I am away from my PC. If only away a few minutes, it'll take about a minute to get to the desktop. If away for an hour, could take up to 15 minutes. If I lock it and go home for the night, I have to restart my PC in the morning (I've let it sit for an hour after a night of being locked and nothing happened). It doesn't do it every time but definitely the majority of the time. One weird thing I've seen is that if I remote into my machine before trying to log back in it does not do it. I uninstalled all software back to the point when I remember it started happening and it still does it. I was using this PC for a few weeks without this problem happening at all. Anyone know what my next troubleshooting steps could be? My IT department tried to fix it by moving my old profile to another disk and having me log in, effectively recreating a profile from scratch but that didn't solve it. As I said above if this isn't the right forum for these types of questions please let me know. Thanks in advance!

    Read the article

  • cPanel FTP account access to sym links from parent directory

    - by totbar
    I would like to give a potential developer temporary access to some of my projects. I have almost everything in its own subdomain, and each directory is a sibling to my public_html directory. It looks something like this ("developer" is the cPanel account name): developer/ is the top-level directory for the cPanel account ("/home/developer"); site1/ is site1.mysite.com; site2/ is site2.mysite.com; site3/ is site3.mysite.com; public_html/ is www.mysite.com; etc. I created a directory inside public_html called tempdev and I added symbolic links to each of the sibling directories listed above. My understanding of cPanel is that I can only assign one user with "Special FTP Access" per domain. I really don't want to give a complete stranger my login creds (it's just a development environment, but still). So I used the cPanel FTP account creator UI. It will not allow me to assign the user access to the directories outside of public_html. I can't even give access to public_html either. So I made the tempdev directory in www and created the symlinks. Using the new account, I can see the symlinks, but I can't go into them. Is there a better way to accomplish what I am attempting?
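
    Jailed FTP accounts typically refuse to follow symlinks that point outside the jail, which would explain the links being visible but not enterable. A hedged alternative is bind mounts, which look like ordinary directories to the FTP server; run as root, and the paths below are the ones from the question.

        # expose each sibling site inside the jailed area via a bind mount
        mkdir -p /home/developer/public_html/tempdev/site1
        mount --bind /home/developer/site1 /home/developer/public_html/tempdev/site1
        # repeat per site, and add matching /etc/fstab lines so they survive a reboot:
        #   /home/developer/site1  /home/developer/public_html/tempdev/site1  none  bind  0 0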

    Read the article

  • Router as primary DNS server, Server as alternate? (or vice versa)

    - by Jakobud
    We have a very small business network, with a typical cable modem hooked into a DD-WRT router. We also run a basic CentOS server that does a variety of things, including acting as the primary DNS server for the office. The reason we need an internal DNS server is because we do a lot of internal web development and use the DNS server to add/remove various local network URLs for internal website testing (like www.testsite.com.local). It's very important for us to be able to add/remove URL aliases easily in the DNS. The problem with this setup is that if we ever need to restart the CentOS server or take it offline for upgrades or whatever, then internet access for all computers on the network is lost. That's because each computer relies on that DNS server to access the Internet, I guess? The router is online all the time and very, very rarely has to be restarted. It would be nice if we could set up the router to be the primary DNS server but still be running DNS on our server. So we could still add our local testing website URLs to the DNS server in CentOS, but also be able to take down the CentOS server without losing Internet access on the network. How would this be set up? Would I simply need to add both router + server IP addresses to each computer's IP settings? Is the router the primary DNS server and the server the secondary, or vice versa? Or can one of the two serve as a fallback for the other? What (if anything) needs to be configured on both the router and server in order for them to recognize that the other DNS server exists on the network? Does anyone have any newb-friendly resources for setting up something like this?
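
    On the client side this is mostly an ordering question: list the CentOS box first and the router second, and lookups fall back to the router (after a short per-query timeout) whenever the server is down. The catch is that the router only knows public names, so the .local test sites will not resolve during an outage. The addresses below are placeholders; in practice it is easier to hand both of them out through the router's DHCP settings than to edit every machine.

        # /etc/resolv.conf on a Linux client (placeholder addresses)
        # primary: the CentOS box that serves the internal zone and .local test names
        nameserver 192.168.1.10
        # secondary: the DD-WRT router, which just forwards to the ISP
        nameserver 192.168.1.1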

    Read the article

  • Upgrade or replace?

    - by Felix
    My current PC is about four years old, although I have made upgrades to it throughout its existence. The current specs are: (old) Intel Pentium D 2.80Ghz (32K L1 / 2M L2), Gigabyte 945GCMX-S2 motherboard (old) 2.5GB DDR2 (slot0: 512MB @ 533Mhz; slot1: 2GB @ 667Mhz) (new) HIS Radeon HD 4670 - I think this is limited by the motherboard not supporting PCIe 2.0 (?) (old) WD Caviar 160GB - pretty slow (new) WD Caviar Black 640GB (if any more specs are relevant, let me know and I'll add them) Now, on to my question. I've been having performance issues lately, both in video games and in intensive applications. A couple of examples: Android application development (running Eclipse and the Android emulator) is painfully slow (on Linux). I only realized this when, at my new job as an Android dev, both tools are MUCH quicker. (I'm not sure what CPU I have there) The guys at my new job got me NFS Hot Pursuit, in which I barely get like 5-10FPS, even with graphics options turned all the way down My guess is that the bottleneck in my system is my CPU, so I'm thinking of upgrading to a Quad Core i5 + new motherboard + 4GB DDR3 (or more, 'cause I know you'll all jump and say 8GB minimum). Now: Is that a good idea? Is my CPU really a bottleneck, or is the whole system too old and I should replace it? I run Windows 7 on the old, 160GB HDD (which is on IDE, by the way). Could this slow down games as well? Should I get a new drive for Windows if I want to play new games? I know nothing about power supplies. Could that be a problem / will it be a problem if I upgrade to an i5? How come DiRT2 works on full graphics settings (pretty amazing graphics by the way) and NFS Hot Pursuit pulls only 5-10FPS?

    Read the article

  • Noob with git repository on Windows Storage Server 2008?

    - by HibbyHoo
    I have a Western Digital Sentinel at home running Windows Storage Server 2008 R2 Essentials. I have several git repositories on it for my own personal projects, and have no problem pushing and pulling over my local network. I want to be able to access those repos remotely from anywhere. I am able to log in and remotely access folders and files on it, but I cannot clone repos using the same address. It hangs for a REALLY long time before finally failing with an error: git.exe clone --progress -v "https://myIpAddressHere/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git" "D:\repo" Cloning into 'D:\repo'... error: Failed connect to myIpAddress:443; No error while accessing https://myIpAddress/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git/info/refs fatal: HTTP request failed git did not exit cleanly (exit code 128) I'm not too privy to networking or web development, and I have only a rudimentary understanding of how to use git (with TortoiseGit). I'm having a hard time finding search results for this specific problem and a hard time interpreting generic tutorials for the general scope of this problem. TortoiseGit version: 1.7.13.0. git version: 1.7.10.mysysgit.1.
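
    A hedged workaround while the HTTPS route gets sorted out: the .../Remote/fs/files.aspx page is the remote-access file browser, not a git smart-HTTP endpoint, so even once port 443 answers it would not serve a clone. Anywhere the file share itself is reachable (on the LAN, or remotely over a VPN), git can clone straight off the UNC path; the share and repo names below are the placeholders from the question.

        # from Git Bash / msysgit on the client, wherever \\mydevicename is reachable
        git clone //mydevicename/myreposfolder/myrepo.git D:/repo
        git remote -v      # origin stays the UNC path for later push/pull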

    Read the article

  • How and where do you manage your domain names?

    - by Saif Bechan
    In the past several years of doing web development I have often needed to buy new domain names. I have also changed registrars a lot, so over the years I have ended up with multiple domain names scattered over different registrars all over the world. Now I want to bring a little structure into my business, and I am at the point where I want easy, convenient control over my domain names. Does anyone have an idea of the best way to bring structure to this? I have made some suggestions; maybe you can comment on them for me. 1) Just leave it as it is. I can leave everything as it is. To make adjustments I have to log into different panels, and for some registrars I have to email the changes. 2) Transfer all the domains to one registrar. This will cost a lot, about 10 USD per domain name. But if I can find a registrar where I have full control over DNS, this is worth looking at. Can you give me some comments on how you are doing things now, and maybe also which registrar you prefer?

    Read the article

  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
            abc) export BIN=${ABC_BIN} ;;
            def) export BIN=${DEF_BIN} ;;
            *)   export BIN=${BASE_BIN} ;;
        esac
        # exit 0 <- bad idea for sourcing the file

    Now these VARs are exported only in a subshell, but I want them to be set in my parent shell as well, so that when I am at the prompt those vars are still set correctly. I know about . .myscript.sh, but is there a way to do it without 'sourcing'? My users often forget to 'source'. EDIT1: removed the "exit 0" part; this was just me typing without thinking first. EDIT2: to add more detail on why I need this: my developers write code for (for simplicity's sake) 2 apps: ABC & DEF. Every app is run in production by a separate user, usrabc and usrdef, and each has set up their own $BIN, $CFG, $ORA_HOME, whatever, specific to their app. So ABC's $BIN = /opt/abc/bin ($ABC_BIN in the above script), DEF's $BIN = /opt/def/bin ($DEF_BIN), etc. Now, on the dev box developers can develop both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file (above) so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, e.g. /home/justin_case/sandbox_abc_beta2, /home/justin_case/sandbox_abc_r1, /home/justin_case/sandbox_def_r1, which makes me do it interactively, asking for a sandbox name, etc. The other option I have considered is writing aliases and adding them to every user's profile, alias 'setup_env=. .myscript.sh', and running it with setup_env parameter1 ... parameterX. This makes more sense to me now.
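
    For background on why sourcing is unavoidable: no child process can change its parent shell's environment, so a script that is executed normally can never push exports back to the prompt; it has to be read by the current shell. The usual way to stop people forgetting is exactly the alias idea above, or a small function in each profile. A minimal sketch, with a hypothetical script path:

        # in ~/.profile (or a site-wide profile fragment): a function runs in the
        # caller's shell, so any exports it makes persist after it returns
        setup_env() {
            INPUT="$1"                      # the sourced script switches on $INPUT
            . /opt/tools/set_app_env.sh     # hypothetical path to the case/esac script above
        }
        # usage:  setup_env abc     leaves $BIN and friends set in the current shell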

    Read the article

  • How to use git to manage one codebase but have different environments

    - by emostar
    I'm using git for a personal project at the moment and have run into the problem of having one codebase for two different environments, and I was wondering what the cleanest way to use git would be. Main Desktop: I use this machine for most of my development. I have a git repository here that I cloned off of an empty repository that I use on my internal server. I do most of my work here and push back to the internal server so I can use that as a master of truth and to ease making backups. Laptop: I sometimes want to code on the road, so I did a clone from the internal server and created a new branch called "laptop-branch". Unfortunately, some directories and the MSVC++ version are different from the Main Desktop environment. I just modified the files in the "laptop-branch" and committed them there. Now I have made a lot of changes while on vacation with my laptop and want to push them to origin, but I don't want the changes I made that were related to directories and compiler versions to be pushed back to origin. What would be the best way to get this done?
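
    One hedged way out of the current state is to build a clean branch from origin/master and cherry-pick only the real work across, leaving the directory/compiler tweak commits stranded on laptop-branch. The branch name "for-upstream" and the commit placeholders are hypothetical.

        git fetch origin
        git checkout -b for-upstream origin/master
        git cherry-pick <sha-of-a-real-change>      # repeat for each commit you want,
                                                    # skipping the environment commits
        git push origin for-upstream:master

    Longer term it is usually less painful to keep the machine-specific bits (paths, compiler settings) in an untracked, ignored file that each machine supplies locally, so the two environments stop needing divergent commits at all.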

    Read the article

  • Is it possible to use different zsh menu selection behaviour for different commands?

    - by kine
    I'm using the menu select behaviour in zsh, which invokes a menu below the cursor where you can see the various possibilities. The .zshrc option I have set for this is zstyle ':completion:*' menu select=2. By default, pressing Return to select a possibility in this menu only completes the word; it does not actually send the command. For example, I might get a menu like this: ~ % cd de<TAB>, completing directory: [Desktop/] Development/. Pressing Return here will result in ~ % cd Desktop/, and I then have to press Return a second time to actually send the command. I can modify this behaviour to make it so that pressing Return both selects the completion and sends the command by doing this: bindkey -M menuselect '^M' .accept-line. However, there's a problem with this: sometimes I need to complete a file or directory without sending the command. For example, I might need to do ln -s Desktop Desktop2; with this bindkey behaviour, trying to complete Desktop will result in ln -s Desktop/ being sent as the command, and obviously I don't want that. I'm aware that just pressing space will let me get on with the command, but it's now a habit. Given this, is there a way to make it so that only some commands let you press Return once (like cd), but all other commands require pressing it twice?
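
    One middle ground, rather than per-command behaviour, is to leave Return as "accept the match only" and bind a second key that accepts the match and fires the line in one go; a sketch for ~/.zshrc, assuming the terminal sends ESC followed by CR for Alt+Return.

        zstyle ':completion:*' menu select=2
        zmodload zsh/complist                       # makes the menuselect keymap bindable
        bindkey -M menuselect '^M'   accept-line    # Return: just take the completion
        bindkey -M menuselect '^[^M' .accept-line   # Alt+Return: take it and run the line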

    Read the article

  • Customizing tmux status to represent current working directory and files

    - by user69397
    I've been playing with this for a couple of days, so I'm sure I'm missing something simple. Love tmux. Using it for development and have so many windows I need a better way of distinguishing them in the status bar and in the buffer list. Seeing a list of "bash" and "vim" isn't really helpful at all. And since they're all on the same host - don't care about the hostname right now. I'd like to show the current working directory, and the file being worked on. For example when I view the list of buffers I currently see: (0) 0: vim [100x44] (1 panes) "murph" (1) 1: vim [100x44] (1 panes) "murph" (2) 2: bash- [100x44] (1 panes) "murph" (3) 3: bash* [100x44] (1 panes) "murph" Here's what I'd like to see 0:vim main.py ~/devl/project1 1:vim index.html ~/devl/samples/staticfiles 2:bash ~/devl/sandbox 3:bash ~/.vimrc I'd like to see similar info in the status bar for each individual window. While I am able to get PWD to show up in the status bar of a window, it's only the working directory from where tmux was launched. This isn't any help as I change directories. I'm hoping this can be done without a bunch of scripts. Thanks all.
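
    A rough sketch in that direction for ~/.tmux.conf; it needs a tmux new enough to expose pane_current_path (and the b: basename modifier) in formats, and it shows each window's running command plus the pane's directory. The file being edited is not something tmux can see on its own, so vim would have to publish that through the pane title if it is wanted too.

        # ~/.tmux.conf additions (sketch); reload with: tmux source-file ~/.tmux.conf
        set  -g status-left '[#S] '
        setw -g window-status-format         '#I:#{pane_current_command} #{b:pane_current_path}'
        setw -g window-status-current-format '#I:#{pane_current_command} #{b:pane_current_path}*'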

    Read the article

  • Google Sites (via Apps) setup questions

    - by Dave
    I thought that it would be a piece of cake to set up a Google site via Google Apps, but perhaps my previous (limited) experience with web development has given me unrealistic expectations. I have actually had a really tough time finding help with the exact question that I have, which is: How do I change the home page contents??? You see, I'm used to having hosting with someone like GoDaddy, where I can just ftp in and drop my HTML files in the www folder. From research I have found that this is simply not possible with any flavor of Google Sites. That's fine, I can live with it. So let's say I have www.mydomain.com. When I hit that URL, it redirects me to a very long URL (unfortunately) like https://sites.google.com/a/mydomain.com/sites/system/app/pages/meta/domainWelcome, which just says: Google Apps Welcome to mydomain.com If you are the domain administrator get started creating your home page with Google Sites Great! I want to do that. So I click on the "If you are the..." link and end up at a screen where I can choose a template, a name, and some visibility options. If I click on My Sites, there isn't a "default" site, i.e. the one that www.mydomain.com displays. I figured that maybe I just have to create a site first, so I went ahead and did that. My first test was to create a site that was publicly accessible. I thought that maybe if I did that, the Google would decide that this must be my home page since it's the only one. But it doesn't, and I still get the "Welcome to" page. Under "More Actions", I didn't see anything interesting except for "Manage site". I went in there and had a peek around, and didn't see anything about using this as the default home page. Am I looking for something that just doesn't exist? I can't believe there isn't a way to modify the "domain welcome to" page...

    Read the article

  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User, but I realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux. We also do not have a router as of yet and are looking into making one of the servers act as our router/DHCP server, etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network; it just won't have a direct connection to the outside world. I've worked on such setups before, but always long after they were set up. So I'm reaching out to see what everyone knows as far as how groups have handled initial setup and maintenance of such a situation. What is the best way to get them all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given only that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting and some teammates have played with Puppet, so we aren't completely in the dark; I just want to avoid reinventing the wheel, since this is a common challenge.
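
    For the update and install side, one well-worn pattern on Red Hat-family boxes is to turn the external drive's RPM dump into a yum repository on the central server and point every host at it over the isolated network; the hostname and paths below are placeholders.

        # on the central server: index the RPMs copied off the external drive,
        # then serve the directory with plain httpd
        createrepo /var/www/html/mirror/rhel-local

        # /etc/yum.repos.d/internal.repo, dropped onto every other host
        [internal]
        name=Internal offline mirror
        baseurl=http://central.lab/mirror/rhel-local
        enabled=1
        gpgcheck=0

    After that, yum clean all && yum update on each host works entirely against the local mirror, and the same web root can carry kickstart files, tarballs, and Puppet manifests for pushing everything else around.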

    Read the article

  • How can I extend / create a new partition from the following setup?

    - by Kiada
    I'm a little unsure what to do in this situation. When I try to create a new simple volume from the unallocated space I get an error because I already have 4 partitions. I have no option to extend either my C:\ primary partition or the E:\ logical drive. C:\ - Gaming Win7 install. D:\ - Storage Unallocated Space - Would somehow like to install OSX on a partition from this space. E:\ - Software Development Win7 install. I:\ - Ignore this. It's an external 1TB HDD. Do I have any options that do not involve formatting / losing information on either C:\ or E:\? Thank you. Link to visual disk partitioning setup image. Edit: A bit more information regarding partitions. Firstly, the image linked above is a screenshot of Windows 7 partitioning tool, easier to read than text I guess! H:\ System Reserved: 100MB NTFS C:\ 244 GB NTFS Healthy (Page File, Primary Partition) D:\ 294 GB NTFS Healthy (Primary Partition) E:\ 100 GB NTFS Healthy (Boot, Page File, Crash Dump, Logical Drive) Unallocated 292 GB Hope this helps :)

    Read the article

  • Centos 5.5 install PearDB

    - by John Gardeniers
    Disclaimer: I use Linux for some jobs but I am not a Linux admin. I have a Centos 5.4 machine which performs some server duties and doubles as a web site development machine. PHP 5.3.3 was installed from RPM with the --without-pear option. I now wish to use PearDB but can't figure out how to install it. If I run yum install php-pear-db, it comes back with Error: Missing Dependency: php = 5.1.6-27.el5_5.3 is needed by package php-devel-5.1.6-27.el5_5.3.i386 (updates). The only RPM I've found that looks like it might be close currently has a dead link, so I can't even try that. What would be the best way to go about this? Is there a way to reinstall from the RPM and include pear? Can I install the dependency without breaking the current installation? Should I try to uninstall the original PHP and reinstall it from source, complete with pear? I thought this might have been an SU question but the FAQ over there suggests otherwise.
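
    A hedged alternative to the distro RPM (which is pinned to the stock PHP 5.1.6): install the PEAR installer that matches the PHP 5.3 build and pull the DB package through it instead. The first package name is an assumption; whichever repository supplied PHP 5.3.3 usually ships a matching pear package.

        yum install php-pear     # or the pear package from the same repo as PHP 5.3.3
        pear install DB          # the PEAR "DB" package itself
        pear list                # confirm DB is registered
        php -r 'require "DB.php"; var_dump(class_exists("DB"));'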

    Read the article
