Search Results

Search found 14639 results on 586 pages for 'coding environment'.

  • Network Share unavailable after DNS Change

    - by Justin Largey
    Hi, I have a server called Server1 with various network shares on it. Our users map to these shares using \\Server1\FileShareName1. During a DR test, we rerouted all network traffic from Server1 to Server21, and all folder shares are set up on Server21. We were hoping the network shares would still be accessible using \\Server1\FileShareName1, but unfortunately they are not. Does anyone know why this is happening? This is a Windows 2003 environment, DNS was flushed, and I confirmed that the IP addresses match between the two servers. Any help or insight is much appreciated.

  • I'm trying to set up Xvfb to run a GUI app on a remote server with no display

    - by jz87
    I have a third-party Java app that I need to run on a remote server. Unfortunately, the app is designed for the desktop and assumes a GUI is available. The thing is, I would like to leave this app running on the remote server without having to tie up my desktop machine with a persistent VNC connection to the remote machine. I'm trying to set up Xvfb on the remote machine to emulate a graphical environment, connect to the remote machine via VNC to launch the app and configure parameters, and then log off and let it run. Here's what I have so far on Ubuntu 11.04 Server:
        apt-get install xvfb
        apt-get install fluxbox
        apt-get install x11vnc
        Xvfb :1 -screen 0 1024x768x16 &
        fluxbox &
    At this point I run into a problem, because I get a very unhelpful error: "Cannot connect to server." How do I know if the server is running, and that it's running properly?
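
    A minimal sketch of how these pieces are usually wired together, assuming the virtual display is :1, x11vnc is the VNC server meant to export it, and /path/to/app.jar stands in for the actual application:

        # start the virtual framebuffer and point the window manager at it
        Xvfb :1 -screen 0 1024x768x16 &
        DISPLAY=:1 fluxbox &
        # export display :1 over VNC on port 5900; -forever keeps it alive between clients
        x11vnc -display :1 -rfbport 5900 -forever -bg
        # sanity checks: is the X server answering, and is something listening on 5900?
        xdpyinfo -display :1 | head -n 5
        netstat -tlnp | grep 5900
        # the app itself also needs DISPLAY set when it is launched
        DISPLAY=:1 java -jar /path/to/app.jar &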

  • SCCM 2012: How to properly update the content of an application?

    - by Omnomnomnom
    I recently set up a new SCCM 2012 environment at my workplace, and now we are creating our applications for distribution. Some applications are set up using a script. During testing something was not right, so the content of the application needed to be changed, but the distribution point keeps serving the old content to the clients. I was wondering what the proper procedure is for updating the DPs when the content of an application changes. I have tried redistributing to the distribution points and deleting old revisions, but to no avail.

  • FreeBSD dev server on VirtualBox under Windows

    - by g_kaya
    I need a Unixy environment for development purposes. I hate doing things on Windows, but it is more stable for daily use and I don't have a Mac, so I'm having to use Windows (7). I want to run FreeBSD in a virtual machine, configure it to be the localhost server, be able to connect using SSH (within my home network) and be able to install the VirtualBox guest additions. If the guest additions aren't the best, I can use Solaris or Linux flavours. I need no GUI. I don't know anything about network stuff, so I need a detailed explanation from wise people here, or a nice doc to read. Edit: to be more specific as requested, I use the following on Unix systems: Django 1.4, Apache, Python (2.7), Emacs, MySQL, probably Node.js, and Bash scripting. I use Windows to be able to do daily things easily, like connecting to my tablet, browsing and learning Java. And I don't want to use Linux as my desktop OS, because it breaks a lot and maintaining it is annoying (WLAN problems and so on).
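
    For the networking part, one low-friction option is VirtualBox's NAT mode with port forwarding, which needs no knowledge of the home network's layout. A rough sketch, assuming the VM is named "freebsd-dev" (a placeholder) and is powered off while the rules are added:

        # forward host port 2222 to the guest's SSH port, and 8080 to a web server in the guest
        VBoxManage modifyvm "freebsd-dev" --natpf1 "ssh,tcp,,2222,,22"
        VBoxManage modifyvm "freebsd-dev" --natpf1 "web,tcp,,8080,,80"
        # start the VM without a window (no GUI needed in the guest)
        VBoxManage startvm "freebsd-dev" --type headless
        # then, from the Windows host:
        ssh -p 2222 user@localhost

    Bridged networking is the other common choice if other machines on the home network should reach the VM directly, but NAT port forwarding keeps everything self-contained on the Windows host.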

  • How to configure installed Ruby and gems?

    - by NARKOZ
    My current gem env returns:
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.6
          - RUBY VERSION: 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
          - INSTALLATION DIRECTORY: /home/USERNAME/.gems
          - RUBYGEMS PREFIX: /home/narkoz
          - RUBY EXECUTABLE: /usr/bin/ruby1.8
          - EXECUTABLE DIRECTORY: /home/USERNAME/.gems/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86_64-linux
          - GEM PATHS:
            - /home/USERNAME/.gems
            - /usr/lib/ruby/gems/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
            - "gempath" => ["/home/USERNAME/.gems", "/usr/lib/ruby/gems/1.8"]
            - "gemhome" => "/home/USERNAME/.gems"
          - REMOTE SOURCES:
            - http://rubygems.org/
    How can I change the path /home/USERNAME/ to my own without uninstalling anything? OS: Debian Linux.
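
    The "gemhome"/"gempath" entries above come from the gem configuration, so one way to repoint them without reinstalling anything is to override them via the environment and ~/.gemrc. A sketch, assuming the new location should be /home/newuser/.gems (a placeholder):

        # per-shell override (add these lines to ~/.bashrc to make them permanent)
        export GEM_HOME=/home/newuser/.gems
        export GEM_PATH=$GEM_HOME:/usr/lib/ruby/gems/1.8
        export PATH=$GEM_HOME/bin:$PATH
        # gem also reads ~/.gemrc, which is where the gemhome/gempath values in the output live
        printf '%s\n' 'gemhome: /home/newuser/.gems' 'gempath:' ' - /home/newuser/.gems' ' - /usr/lib/ruby/gems/1.8' > ~/.gemrc
        # verify that INSTALLATION DIRECTORY and GEM PATHS now point at the new home
        gem env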

  • How to set/keep directory permissions?

    - by Dylan
    I'm using cwRsync to connect from my Windows development machine to a Linux web server:
        rsync -avuz -e ./ssh --exclude=".svn" /cygdrive/c/xampp/htdocs/project123/ [email protected]:/home/user123/public_html
    This syncs my development project directory to the server nicely and quickly. But after doing this, all directory permissions are reset to the local user user123 only, so the website is not available anymore, and I need to reset those permissions manually. Why is this happening, and how can I prevent it? P.S. Coming from a Windows environment, I'm having a really hard time understanding rsync. I copied the above command from some examples... I just need to get this one small thing working too...
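
    One common cause is that -a implies -p/-o/-g, so the Cygwin-side permissions and ownership get pushed onto the server. A sketch of an invocation that leaves ownership alone and forces sane modes instead (user123@SERVER is a placeholder for the real login):

        # --no-owner/--no-group stop rsync from touching ownership;
        # --chmod forces 755 on directories and 644 on files on the receiving side
        rsync -avuz -e ./ssh --exclude=".svn" \
            --no-owner --no-group \
            --chmod=Du=rwx,Dgo=rx,Fu=rw,Fgo=r \
            /cygdrive/c/xampp/htdocs/project123/ user123@SERVER:/home/user123/public_html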

  • Best practice for administering a (Hadoop) cluster

    - by Alex
    Dear all, I've recently been playing with Hadoop. I have a six-node cluster up and running with HDFS, and I have run a number of MapReduce jobs. So far, so good. However, I'm now looking to do this more systematically and with a larger number of nodes. Our base system is Ubuntu, and the current setup has been administered using apt (to install the correct Java runtime) and ssh/scp (to propagate the various conf files). This is clearly not scalable over time. Does anyone have any experience of good systems for administering (possibly slightly heterogeneous: different disk sizes, different numbers of CPUs on each node) Hadoop clusters automagically? I would consider diskless boot, but I imagine that with a large cluster, getting it up and running might be bottlenecked on the machine serving the OS. Or some form of distributed Debian apt to keep the machines' native environments synchronised? And how do people successfully manage the conf files across a number of (potentially heterogeneous) machines? Thanks very much in advance, Alex
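
    Purely as a stop-gap before a real configuration-management tool, the conf-file propagation can at least be scripted against a single canonical node list. A minimal sketch, assuming passwordless SSH as a "hadoop" user, node names in nodes.txt, and the conf directory under /etc/hadoop (all placeholders):

        # push the same conf directory to every node and confirm the Hadoop version on each
        for host in $(cat nodes.txt); do
            scp -r /etc/hadoop/conf "hadoop@${host}:/etc/hadoop/"
            ssh "hadoop@${host}" "hadoop version | head -n 1"
        done

    Configuration-management tools (Puppet, Chef, cfengine) and parallel-shell tools (pdsh/pdcp) scale the same idea, and per-node differences such as disk layout are usually handled by templating the conf files rather than hand-editing each node.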

  • Server installation logging / logbook / diary?

    - by The MYYN
    Are there field-tested ways to keep a kind of logbook for a server, including software installations (and removals), custom configuration (e.g. of a web server or the SSH daemon), personal notes, and the big picture? I am preparing a server and would like to document extensively its state and how that state was established over time, so that a new person can easily see what's going on and why. The setup is not too complicated, but I would like to do it anyway. I once used something like "Maintain /etc with Mercurial" on Debian and it was nice, but I am looking for a slightly more flexible solution. Addendum: so I am interested in logging and documentation first. In an ideal world, however, I would like to have a command which, in a few steps, would take me from a bare, newly installed Unix system to a functional environment with all the components set up and in place, by means of, say, an 'executable' log. But that would be very ideal, I imagine.
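
    A small, field-tested piece of this (in the same spirit as the "/etc under Mercurial" setup mentioned above) is etckeeper, which keeps /etc in a version-control system and auto-commits around package operations; free-form notes can ride along in the same history. A sketch on Debian/Ubuntu (the VCS used is set in /etc/etckeeper/etckeeper.conf):

        apt-get install etckeeper
        etckeeper init                              # put /etc under version control
        etckeeper commit "initial state"
        # ...after any manual configuration change:
        etckeeper commit "sshd: disabled password authentication"
        # a plain-text logbook can live in the same repository
        echo "$(date +%F): installed nginx, proxying to the app on :8080" >> /etc/server-logbook.txt
        etckeeper commit "logbook entry"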

  • Is it possible to cause artificial network packet loss or latency?

    - by nbolton
    I'm trying to reproduce some issues on a deployed application where the MSSQL server and client are running on two separate machines. I think there may be network issues between the two machines, so I'd like to try to reproduce these conditions on two Hyper-V virtual machines (on the same virtual server). Of course, the network between these virtual machines is "local", so it's actually far from the conditions in a live environment. Is there a program I can run on either virtual machine which will degrade the network performance? Or maybe any other workarounds? For example, one way to reproduce the conditions might be to run the VMs on separate Hyper-V servers in geographically dispersed locations (so the SQL traffic goes over VPN or something), but this is a little long-winded, I think. There must be a simpler way.
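
    One approach is to route the SQL traffic through a small Linux VM (or degrade it on a Linux endpoint) and use tc's netem queueing discipline to inject latency and loss. A sketch, assuming eth0 carries the traffic:

        # add 100 ms of delay and 5% packet loss to everything leaving eth0
        tc qdisc add dev eth0 root netem delay 100ms loss 5%
        # adjust it (200 ms +/- 50 ms jitter, 1% loss) or inspect what is active
        tc qdisc change dev eth0 root netem delay 200ms 50ms loss 1%
        tc qdisc show dev eth0
        # remove the emulation when done
        tc qdisc del dev eth0 root

    For an all-Windows setup, purpose-built WAN emulators such as WANem (a bootable appliance the VMs route through) do the same job.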

  • SQL database testing: How to capture the state of my database for rollback?

    - by Rising Star
    I have a SQL server (MS SQL 2005) in my development environment. I have a suite of unit tests for some .NET code that connects to the database and performs some operations. If the code under test works correctly, then the database should be in the same (or a similar) state to how it was before the tests. However, I would like to be able to roll the database back to its state from before the tests ran. One way of doing this would be to programmatically use transactions to roll back each test operation, but this is difficult and cumbersome to program and could easily lead to errors in the test code. I would like to be able to run my tests confidently, knowing that if they destroy my tables I can quickly restore them. What is a good way to save a snapshot of one of my databases, with its tables, so that I can easily restore the database to its pre-test state?
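
    A minimal sketch of the backup-before/restore-after approach, driven from the command line so it can be hooked into a test runner; the server, database name and path are placeholders, and database snapshots (an Enterprise-edition feature in SQL Server 2005) are a lighter-weight variant of the same idea:

        rem capture the pre-test state
        sqlcmd -S localhost -E -Q "BACKUP DATABASE TestDb TO DISK='C:\Backups\TestDb_pretest.bak' WITH INIT"
        rem ...run the test suite...
        rem throw away whatever the tests did and return to the pre-test state
        sqlcmd -S localhost -E -Q "ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
        sqlcmd -S localhost -E -Q "RESTORE DATABASE TestDb FROM DISK='C:\Backups\TestDb_pretest.bak' WITH REPLACE"
        sqlcmd -S localhost -E -Q "ALTER DATABASE TestDb SET MULTI_USER"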

  • Web Content Filtering for Windows Clients

    - by djoyce
    I'm working with a small business to solve a bunch of problems. One is that their Windows 7 POS registers need to have web access restricted to only three remote support sites, while the back-office machine needs an unfiltered connection. I'd like something I can install and configure on the few registers to block all but those few sites. In a perfect world this would restrict the normal register user, but the admin user would not be filtered. Free is best, if it works, but a small fee would be alright too. Microsoft's Family Safety filter is close, but requires a Windows Live account, which isn't ideal but may be alright. Does anyone use this in a small business environment? I'd prefer something easily managed on the local machines. K9 Web Protection is interesting and I'm going to look into it more. Are there other options? It seems like someone would have made something simple like this as an open source project, but maybe not.

  • Is it a good idea to take onsite/offsite backups of server images?

    - by ServerAdminGuy45
    Assuming a non-virtualized environment, is it a good idea to take actual images of servers (using something like Acronis True Image) and store them on/off site? Backing up data is great, but I feel it would be good to have copies of OS images so that, in the event hardware dies or an upgrade gets botched, I can always revert. What would be your recommended way to do this (preferably using a NAS and an online backup service)? I was talking with the Iron Mountain folks, and the service they described is geared more toward taking incremental snapshots of data. I'm not sure if there's a way to back up images incrementally such that only the changes between them are saved (that way I'm not wasting X GB each time I take an image).

  • Automated VLAN creation with residential Wireless devices

    - by Zephyr Pellerin
    We've got a few WRT devices from Linksys here, and the need has arisen to deploy them in a relatively small environment. However, in the interest of manageability we'd like to be able to automatically VLAN (ideally not subnet) every user from one another. It seems obvious to me that the default firmware isn't capable of this - can OpenWRT/Tomato/DD-WRT support any sort of functionality such that new users are automatically VLANed or otherwise logically separated from other users? It seems like there's an easy iptables or PF solution here, but I've been wrong before. (If that seemed a little ambiguous, here's an example.) User 1 sends a DHCP request to the server, a new VLAN (we'll call it VLAN 1) is created, and the user is placed in that VLAN. Then user 2 sends a DHCP request and is placed in VLAN 2, etc.

  • What is your approach to drawing a representation of your network?

    - by Kartoch
    Hello, I'm looking to the community to see how people draw their networks, i.e. use symbols to represent complex topology. You can take a hardware approach, where every hardware unit is represented. You can also take an "entity" approach, where each "service" is shown. Both are interesting, but it is difficult to have both on the same diagram (and this is needed, especially in a virtualized environment). Furthermore, it is difficult to convey complex information in such a representation, for instance security parameters (encrypted links, the need for authentication) or specific details (protocol type, ports, encapsulation). So my question is: when you draw a representation of your network, what is your approach? Do you use a methodology and/or specific software? What are your recommendations for which information to include (or not)? How do you deal with the complexity when the network becomes large and/or you want to put a lot of information on it? Examples and links to good references will be appreciated.

  • What do I put on my developers' boxes in regard to SharePoint 2010?

    - by Jisaak
    So we are venturing out into the world of SharePoint, and it seems that I have to install SharePoint Server directly on each developer's box. Is this correct? I have SharePoint up and running on a separate server, so it seems redundant to have to install it on each box - not to mention that installing SharePoint on Windows 7 is a pain in the arse. I'm just trying to clarify how to set the environment up correctly. I've been using this link as a guide so far: http://msdn.microsoft.com/en-us/library/ee554869(office.14).aspx Any advice is greatly appreciated!

  • Software RAID underneath ESXi datastore

    - by carlpett
    I'm building a virtual environment for a small business. It is based around a single ESXi 5.1 host, which will host half a dozen or so VMs. I'm having some doubts about how to implement the storage, though. I naturally want the datastore to be fault tolerant, but I can't get the funds for a separate storage machine, nor for expensive hardware RAID solutions, so I would like to use some software RAID (lvm/mdadm, most likely). How can this be implemented? My only idea so far would be to create a VM which has the storage adapter as passthrough, put some software RAID on top of the disks, and then present the resulting volumes "back" to the ESXi host, which then creates a datastore from which the other VMs get their storage. This does seem kind of roundabout; do I have any better options? From my research, passthrough seems to come with quite a few drawbacks, such as no suspend/resume, etc.
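
    A sketch of the software-RAID piece inside such a storage VM, assuming the two passed-through disks show up as /dev/sdb and /dev/sdc; presenting the result back to ESXi (typically as an NFS or iSCSI datastore) is a separate step:

        # mirror the two raw disks
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        # put a filesystem on the mirror and mount it where the NFS/iSCSI export will live
        mkfs.ext4 /dev/md0
        mkdir -p /export/datastore
        mount /dev/md0 /export/datastore
        # keep an eye on array health
        cat /proc/mdstat
        mdadm --detail /dev/md0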

  • How to rewrite index.php (and other valid default files) to the document root using mod_rewrite?

    - by TMG
    Hello, I would like to redirect index.php, as well as any other valid default file (e.g. index.html, index.asp, etc.), to the document root (which contains index.php) with something like this:
        RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)/?$ / [NC,L]
    However, this is of course giving me an infinite redirect loop. What's the right way to do this? If possible, I'd like to have this work in both the development and production environments, so I don't want to specify an explicit URL like http://www.mysite.com/ as the target. Thanks!

  • How to create launch icons on Ubuntu Desktop?

    - by MattSlay
    I installed Ubuntu in a VM using VirtualBox, and after a few hours of Googling all the things I had to install and configure for the rest of the environment, I am finally up and running with a Ruby on Rails IDE and MySQL. After I start up the Ubuntu VM, I have to go to a terminal window and run this to start MySQL:
        /etc/init.d/mysql start
    That works fine, but since I am such a GUI person, I'm wondering how I can create an icon on the Ubuntu desktop that I can click to launch this command. Can you tell me how to do that?
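
    One way to get a clickable launcher is to drop a .desktop file onto the Desktop. A sketch - the icon name and the use of gksudo are assumptions (drop gksudo if the service starts without root):

        # write a minimal launcher file and mark it executable so the desktop will trust it
        printf '%s\n' '[Desktop Entry]' 'Type=Application' 'Name=Start MySQL' \
            'Exec=gksudo /etc/init.d/mysql start' 'Terminal=true' 'Icon=utilities-terminal' \
            > ~/Desktop/start-mysql.desktop
        chmod +x ~/Desktop/start-mysql.desktop

    Alternatively, "sudo update-rc.d mysql defaults" makes MySQL start automatically at boot, which removes the need to click anything.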

  • Map localhost to IP address on Windows XP & Internet Explorer 7+?

    - by roblocop
    I'm trying to map 'localhost' to an IP address elsewhere on the network, say '10.0.1.1' for example. I've tried editing my hosts file, changing the entry from:
        127.0.0.1 localhost
    to:
        10.0.1.1 localhost
    with no luck. The closest I've gotten is using DNS spoofing via Charles. Adding a DNS spoof entry mapping the host name 'localhost' to '10.0.1.1' works fine in Firefox, but fails in Internet Explorer, which basically shows its 404 page. I'm wondering if there's some specific setting, or another way I can get DNS spoofing to work in IE? The main issue I'm trying to resolve is that our development environment points to 'localhost', and rather than setting the dev environment up on a legacy Windows laptop to try and debug, I'd like to point it at a server that has everything set up so I can make the changes remotely.

  • Trac permission denied for SVN repo

    - by plesatejvlk
    I'm running Apache2, SVN and Trac on openSUSE. SVN works like a charm. I've initialized a Trac environment for one of my SVN repositories so that Trac shows the source code in its repository browser, and I set the repository up in the Trac web admin. I also ran trac-admin resync for that repo without problems. Trouble is, when I open the Trac repository browser I get: "can't open file: /srv/svn/repos/myrepo/format, access denied". I checked the permissions:
        - Apache runs as wwwrun
        - tracd runs as wwwrun
        - the whole subtree /srv/svn/... belongs to the svn group, and the group has rw permissions all the way down to the "format" file
        - wwwrun is in the svn group
    I also did a permissions check:
        $ sudo -u wwwrun cat /srv/svn/repos/myrepo/format
    and got the file printed out without trouble. So in my opinion there should not be any permission conflict. Any idea what else to check? Thanks in advance!
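
    Two things worth checking beyond the read/write bits: every directory on the path needs the execute (search) bit for the group, and group membership granted after Apache started only takes effect once it is restarted. A sketch of the checks and fixes (rcapache2 is openSUSE's Apache init script; adjust if yours differs):

        # show owner/group/mode for every component of the path in one go
        namei -l /srv/svn/repos/myrepo/format
        # make sure the svn group can read files and traverse directories all the way down
        chgrp -R svn /srv/svn/repos/myrepo
        chmod -R g+rX /srv/svn/repos/myrepo
        # confirm wwwrun really is in the svn group, then restart Apache so it picks the group up
        id wwwrun
        rcapache2 restart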

  • How to move mail accounts when migrating webhosting

    - by pkswatch
    I am migrating my website abc.com from one web hosting company to another, in a shared hosting environment. Both have cPanel. The second hosting account, which I am preparing to move to, is my multi-domain hosting account with three domains already in it. The problem is that I have many email accounts associated with my website abc.com, which are accessed using webmail. So if I move it to the other host, will I lose all those accounts and their emails? If yes, then how should I synchronise the email accounts so that all the accounts and the contained emails remain intact? I saw several sync tools like IMAP Sync, etc., but these require two hosts while synchronizing, and as you see, I have just one domain name to be synchronized across two servers. P.S. I do not have SSH access on either of them, and I have made a complete backup of all files using the backup wizard in cPanel.
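
    imapsync-style tools don't need SSH or two copies of the domain; they only need to reach both servers' IMAP ports, so they can be run from any machine (a desktop included) against the servers' hostnames or IPs rather than abc.com itself. A sketch with placeholder hostnames and credentials, repeated (or looped) once per mailbox:

        # copies every folder and message for one mailbox from the old server to the new one
        imapsync --host1 old-server.example.com --user1 info@abc.com --password1 'oldpass' \
                 --host2 new-server.example.com --user2 info@abc.com --password2 'newpass'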

  • Transparently set up Windows 7 as a remote workstation

    - by Áxel
    Maybe this is a very basic question, but I can't find the exact terms to Google for it and find a concrete answer to my doubt. Suppose we have several PCs at which individual employees work. One of them has an extremely powerful CPU, and it's very useful to use that computer to perform heavy computations, but going there and setting up your task means its user has to stop working for a while. Is it possible to allow a secondary user account to log in remotely, for example via Remote Desktop, and work with a full user environment, while the main user keeps working under his own session? I've used Remote Desktop many times in the past, but it always blocked the current user's session, or even terminated it. Lots of thanks in advance, guys.

  • Security considerations in providing VPN access to non-company issued computers

    - by DKNUCKLES
    A few people at my office have requested the installation of Dropbox on their computers to synchronize files so they can work on them at home. I have always been wary of cloud computing, mainly because we are a Canadian company and enjoy the privacy of being outside the reach of the Patriot Act. The policy before I started was that employees with company-issued notebooks could be issued a VPN account, and everyone else had to use a remote desktop connection. The theory behind this logic (as I understand it) was that we had the ability to lock down the notebooks, whereas the employees' home computers were outside our grasp. We had no way to ensure they weren't running as administrator all the time or that they were running AV, so they were at higher risk of being infected with malware and could compromise network security. With the increase in people wanting Dropbox, I'm curious whether this policy is too restrictive and overly paranoid. Is it generally safe to provide VPN access to an employee without knowing what their computing environment looks like?

  • SharePoint Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load balancing. There are three options you can choose from when setting up SQL Server session storage: tempdb, a default separate DB, or a named DB. Both the tempdb and default-separate-DB options create a new database to store certain information in; the tempdb option stores the actual session data in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new database. Since you've got to create the new database either way, my question is this: why would you ever choose to store the session data in tempdb? The only thing I can think of is that you'd like the ability to wipe the session state by rebooting the server, but that seems quite apocalyptic!
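
    For reference, the three choices map directly onto the -sstype switch of aspnet_regsql.exe, which is what sets up SQL Server session state for ASP.NET; a sketch with a placeholder server name, run from the .NET Framework directory on a farm member:

        rem persisted session state in the default ASPState database:
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype p
        rem use "-sstype t" for tempdb-backed state, or "-sstype c -d MyDbName" for a named database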

  • SharePoint 2010 MySites - Host on separate servers

    - by Chris W
    We're playing with the SP 2010 beta ahead of a planned deployment later this year in an academic environment. We anticipate that the majority of traffic will go through MySites once everything is provisioned, so we're looking at how to plan our SP topology so it scales nicely. An initial thought is to run the main portal on one server, host "Student" MySites on one server and "Staff" MySites on another. Is it actually possible to do this easily, or are we going down a bad path? Specifically: can we have two different MySites site collections, each hosted on a dedicated server? If so, can we configure SharePoint to work out from the user's logon account what type of user they are and route them to the correct server?
